I'm currently away. I'll be back some time after new years. My range will (hopefully) be done by then...
How did you guys manage to get them done so fast? Multiple computers?
800-900G will be done today.
I will start on the 1400-1500G range.
How did you guys manage to get them done so fast? Multiple computers?
At the moment my AMD XP2000+ is doing the job.
At over 1040G the system crunches more than 1G per hour.
So 1T to 1.1T only takes about four days for me.
Lars
Just a quick question about the new beta version of NewPGen (working fine by the way), SSE2 code has not been implemented yet, right?
Looks like it'll take about 5-7 days for me to complete the range.
Originally posted by Bellbox
Just a quick question about the new beta version of NewPGen (working fine by the way), SSE2 code has not been implemented yet, right?

No idea actually, but I don't think so. The What's New section doesn't say anything about SSE2.
We're below 4000 (3998) remaining numbers now. Below 1T only the range 500 - 600 is missing. We've got to wait till New Year for that.
I can't edit my posts, so here is the list of ranges in progress again:
500 - 600 ceselb
1000 - 1100 LTD
1100 - 1200 SMH
1200 - 1300 Bommer
1300 - 1400 Bellbox
1400 - 1500 Bewolf
What is it that you all are doing?
Is this some project within the SoB project, or something unrelated? There seems to be a whole lot of activity with this sieving, and I don't see anything about it on the SoB web site.
Originally posted by Angus
What is it that you all are doing?

In short: we try to find factors for all numbers 4847*2^N+1 with N between 2 and 3 million. Most of these numbers can be divided by a small prime. If this is the case for a particular N, then we don't have to run a test with the SoB client, since that number can't be prime anymore. At the moment, of the 1 million N's, there are only about 4000 numbers left for the SoB client.
When the tests that are assigned are near 2M, I'll send the results to Louie, who can remove the numbers from the database.
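The elimination described above comes down to modular arithmetic: a prime p removes every N with k*2^N ≡ -1 (mod p). A brute-force sketch of that idea (for illustration only; NewPGen uses much faster techniques, and the helper name is made up):

```python
def eliminated_ns(k, p, n_lo, n_hi):
    """Return all n in [n_lo, n_hi) for which p divides k*2^n + 1.

    p eliminates n exactly when 2^n == -1/k (mod p); we scan the
    range by repeated doubling instead of recomputing pow() each time.
    (pow(k, -1, p) needs Python 3.8+ for the modular inverse.)
    """
    target = (-pow(k, -1, p)) % p      # the value 2^n must hit mod p
    hits = []
    r = pow(2, n_lo, p)
    for n in range(n_lo, n_hi):
        if r == target:
            hits.append(n)
        r = (r * 2) % p
    return hits

# Example: for k = 4847, the prime 7 eliminates n = 1, 4, 7, ...
print(eliminated_ns(4847, 7, 1, 10))
```

Every n this finds has a known small factor, so it never needs a full primality test with the SoB client.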
Originally posted by SMH
When the tests that are assigned are near 2M, I'll send the results to Louie who can remove the numbers from the database.

At the current rate, 2006030 will be the N value in 4.5 days.
Joe O
1400 to 1500G is done with 11 numbers removed.
That averages out to one number every 8.2 hours. IIRC, the SoB client takes 12-15 hours for one test on my computer. There is still some advantage to sieving, but not much.
Originally posted by Bewolf
1400 to 1500G is done with 11 numbers removed.
That averages out to one number every 8.2 hours. IIRC, the SoB client takes 12-15 hours for one test on my computer. There is still some advantage to sieving, but not much.

I suppose the 12 to 15 hours is for exponents around 1.8 or 1.9M? Don't forget that numbers of double the size take about four times longer! So sieving is still 2 times faster around 2M, and it will be 4 or 5 times faster for exponents around 3M! And I guess you have a P4? If so, the advantage of sieving will be even greater due to the lack of SSE2 code in NewPGen (at least I think it still lacks the SSE2 code; if not, it doesn't matter much which CPU you are running on).
If the client could be made 20% faster, that would be a big improvement, but sieving is at least 100% faster. I do call that a lot!
500-600G is done, 35 numbers removed.
Is anybody working on 1500-1600G yet, or have we sieved far enough?
As far as I know, nobody is sieving 1.5T-1.6T yet.
Sieving is still a lot faster, so it's definitely worth sieving further. On the other hand, we're almost at N=2M, so I think I need to send the results so far pretty soon.
My 1100G to 1200G range is at 1156G, but I don't have spare CPU cycles to push it further, so you might want to continue from there too.
Ok, I'll start at 1156 using the beta version of NewPGen.
Over the holidays it dawned on me how easy it would be to make a web form to allow people to submit factors.
Since a factor is easy to verify, I could allow anyone to submit them. I may allow people to submit with a username if they wish and then numbers factored could be part of people's stats again. As it is, there is already a spot in the database for factors so it's just a matter of writing the script. I'll let you all know when I (or Dave) get such a script working.
Maybe Paul Jobling will release the multiple k sieve version of NewPGen if he knows that everyone can use it. As it is, I have been having some problems making his code thread safe for my cluster siever I was planning. His code is great, but it was designed for speed... not threading.
As for the current sieving... Sander, if you have a full delete file or sieve file for 4847, it would be convenient if you could email it my way. Payam tells me that the new numbers are also at http://sierpinski.insider.com/4847
Anyway, I'll let everyone know when I make a factor submission/verification form. Maybe once that's done, there can be a more central spot where people can "reserve" ranges, download realtime updated sieve files, get links to Paul's siever, and instructions on how to use it.
-Louie
Paul sent me this on December 18th and asked me to forward it to the list, but I was out of email contact over the holidays:
Hi Phil,
still can't post in the SoB forum, so could you put this information in there
for me?
I have produced a cut-down version of the fixed k sieve in NewPGen, and I have
enhanced it to sieve more than one k value at a time. I have also tinkered
with it a little, with the result that in the same amount of time, this can
sieve about 50% further than NewPGen 2.70.
I have sent the source code to Louis, and while he can't redistribute the
source, he can produce and distribute executables based on the source code.
The code can be built on Windows and Linux. Since it makes very extensive use
of x86 assembler, it will not be possible to produce a fast Solaris port.
Initially Louis was intending to use his 40 processor Linux cluster to do the
sieving, however there is nothing to prevent this being built into the client
at some point in the future, should the SoB coordinators decide that would be
useful.
I believe that the intention is to use this to sieve n values above 3 million
for the 13 remaining k values. It could be used with the remaining n values
below 3 million, but to be useful they should all be sieved to the same level
first.
Regards,
Paul.
I came here to give good news. All the sieve results that Sander sent me have been manually merged into the database. This removed quite a few tests for 4847, so that's always good. I also added tests for k=28433 between 2 million and 3 million.
So today marks the first day that either of these k values has been assigned. This means that SB is now officially testing all remaining k's, not just 10 out of 12. Hopefully this will help us find more primes soon.
-Louie
@Louie: A central point to reserve ranges for sieving would be really cool. I can't wait to see it happen.
@Sander: You're right, I looked back at my logs and the n's that took 12-15 hours were all around 1.5M. My latest block has an n of 1.9M, and it looks like it will take 28 hours to complete. So sieving is still a whole lot faster.
Louie,
Are you purposely retesting the numbers Payam already tested or did you forget to remove them?
Sander
Is proving that the k+2^n dual problem only has 28433 left to check exactly the same as what we are doing?
In other words, do we really have to FIND the prime for the non-28433 numbers even if there is proof they already exist for some N ?
I am good at math, but this is getting out of my league
A "yes" answer to an either/or question? Hm...
I'm quite interested in an illustration of that "negative continuation" stuff. Are the same k/n pairs prime (it seems that's not the case, so where IS the connection)? And as I'm afraid I'm not deep enough into prime theory, could someone tell me why it's easier to find those duals?
This seems a bit off-topic for this particular thread, so maybe if we want to discuss it further, we could start a new one. However, this topic was discussed on the primenumbers group several months ago, and a search there on "78557" turned up several interesting posts to that group. You might start with this particular one:
http://groups.yahoo.com/group/primenumbers/message/7042
which proves not only that 78557*2^n+1 can never be prime, but also that 2^n+78557 can never be prime. The relation of the two problems is discussed in other posts as well, in quite a bit of detail. The eventual conclusion is that elimination of a k-value from the dual Sierpinski project makes it extremely unlikely that it can be a Sierpinski number, but it does not prove it.
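The proof in that message rests on a covering set: a fixed list of primes such that every exponent n is "covered" by at least one of them. The well-known covering set for 78557 handles both forms, which is easy to check directly (a sketch, not the argument from the linked post):

```python
# Well-known covering set for k = 78557: for every n, at least one of
# these primes divides 78557*2^n + 1, so the number is never prime.
# The same set covers the dual form 2^n + 78557 as well.
COVER = [3, 5, 7, 13, 19, 37, 73]

def covered_proth(n):          # 78557*2^n + 1
    return any((78557 * 2**n + 1) % p == 0 for p in COVER)

def covered_dual(n):           # 2^n + 78557
    return any((2**n + 78557) % p == 0 for p in COVER)

print(all(covered_proth(n) for n in range(1, 500)))
print(all(covered_dual(n) for n in range(1, 500)))
```

The dual is covered by the same set because p divides 2^m + k exactly when p divides k*2^(-m) + 1, and negating residues modulo the order of 2 still hits every residue class the set covers.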
There are 3 different problems:
(1) The original Sierpinski problem, which asks for the smallest k such that k*2^n+1 is composite for all n. SB has successfully removed 5 hard candidates out of 17 (a really great work which must be continued).
(2) The dual Sierpinski problem, which asks the same question for the sequence k+2^n. This can be considered the negative continuation of the Proth sequence k*2^n+1:
k*2^(-n)+1=k/2^n+1=(k+2^n)/2^n
For this problem there remain 8 candidates to be removed:
2131, 8543, 28433, 37967, 40291, 41693, 60451, 75353
see http://sierpinski.insider.com/dual
(3) The mixed Sierpinski problem, which asks the same question for both sequences k*2^n+1 and k+2^n simultaneously.
For this special problem there remains only one candidate to be removed, namely k=28433.
Payam
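Payam's identity, and the way a factor transfers between the two forms, can be checked directly (a small numeric sanity check, values chosen arbitrarily):

```python
from fractions import Fraction

k, n = 28433, 10

# k*2^(-n) + 1 == (k + 2^n) / 2^n, as exact rationals
lhs = Fraction(k, 2**n) + 1
rhs = Fraction(k + 2**n, 2**n)
print(lhs == rhs)

# Consequence: an odd prime p divides k + 2^n exactly when it divides
# k*2^(-n) + 1, which is why factoring the dual form mirrors the Proth
# form. Tiny example: 5 divides 28433 + 2^1 = 28435.
print((28433 + 2**1) % 5 == 0)
```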
Originally posted by Mystwalker
Could someone tell me why it's easier to find those duals?

It is not easier when we are considering the same n.
There are some minor points which must be considered:
(1) The part that makes it easier: the original candidates have been searched up to high n limits, while the duals have not been yet.
(2) The part that makes it harder: PFGW, which is used for finding duals, is a little slower than PRP or SB, which are used for finding Proths.
for more details you can visit http://sierpinski.insider.com/dual
Payam
1156-1200G done, 6 numbers removed.
Starting on 1500-1600G.
I sieved the range 1200G-1300G and removed 21 numbers.
Cu Bommer
Should we organize an effort to sieve all k's from n > 3M, or is that being integrated into the SoB client?
Just got caught up on this thread.
Good work guys.
Originally posted by ceselb
Should we organize an effort to sieve all k's from n > 3M, or is that being integrated into the SoB client?

IIRC Louie is testing a program that can sieve all k's at once. Haven't heard much on how he is doing.
I think the problem with making the sieving distributed is that it's most efficient to sieve a rather large range of N's, at least up to 10M but maybe 20M.
12 k's times 13M values, that's a lot. After sieving a couple of G, a wild guess would be that there are still over 1M values left to test. The NewPGen output format is plain text, so it will be several MB to start with. Even compressed, this will still be a lot of data to send to all clients.
Sure, it's a large download, but you'd only need to get it once. After that you'd update/remove values from the *.del files. If nobody is willing to automate this, I'm sure it could be done manually, either here or by mailing list (egroups?).
I agree with ceselb, even though the list of numbers not yet eliminated by the sieve would be large, it would not have to be downloaded very often, and it seems that so far, the number of people interested in doing sieving is relatively small. I made a rough estimate of how many numbers would be left if all exponents from 3,000,000 to 20,000,000 were sieved to 400G for all 12 remaining k-values, and came up with about 700,000. Two bytes per number means a 1.4Gbyte file, and if instead of sending all n-values, only a difference in successive n-values was sent, this could be compressed by at least a factor of 2. Increase the range to 3,000,000 to 30,000,000 and we get around 1,100,000 remaining exponents. Of course, we can hope that eliminating a couple of k-values will reduce this size in the near future!
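The difference trick mentioned above is plain delta encoding: successive surviving n-values are close together, so the gaps pack much tighter than the absolute values. A toy sketch (the sample n-values are made up):

```python
# Delta-encode a sorted list of remaining n-values.
ns = [3000251, 3000307, 3000419, 3001003]      # hypothetical sample

# First entry is kept as-is; the rest become small gaps.
deltas = [ns[0]] + [b - a for a, b in zip(ns, ns[1:])]

# Round-trip decode with a running sum.
decoded, total = [], 0
for d in deltas:
    total += d
    decoded.append(total)

print(deltas[1:])        # the gaps are small
print(decoded == ns)     # lossless round trip
```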
No, it doesn't ("Two bytes per number means a 1.4Gbyte file"). I'm bored, so I sieved all 12 remaining k's with n from 3M - 20M to p=10000. These files are 29.4MB in total, 5.37MB zipped. Not very large at all.
Last edited by ceselb; 01-08-2003 at 01:35 PM.
As a test I sieved k=4847 for n=3M to 20M up to 4.4G, at which point there were roughly 86,000 numbers left. The NewPGen file was 1.2MB. Assuming this is average for all k values, a file for the remaining k values would be about 14 MB after only 4G. This can be zipped down by 80%, so the remaining file would be less than 3 MB, which isn't too bad to download.
I think the harder part would be organizing the sieve. A webpage for reserving ranges would be the best (even better if it generated the input file for the range). Keeping track by hand would be tedious even with a handful of people.
Sorry, I meant MB when I said GBytes! But the input file would be the same for all sievers, since they would all be sieving the same range of n values.
I have a zip of all k's done to p=2M, here (3.54MB) if anybody is interested.
It seems like Louie will take care of the organization, so it won't be a serious problem. He wrote:
"I'll let everyone know when I make a factor submission/verification form. Maybe once that's done, there can be a more central spot where people can "reserve" ranges, download realtime updated sieve files, get links to paul's siever, and instructions on how to use it."
I didn't participate in the 4847 sieve because I couldn't help much with 100G chunks (unfortunately I have only one PC, and its CPU time is already split among three projects, including SoB and GIMPS).
I'd love to participate when Louie launches the web form (especially if it enables reserving less than 100G for participants like me). I'm sure some other people will also join when it's launched and announced at the news page.
I didn't want to have too small ranges when I collected the results on k=4847, because I was doing everything by hand. 50G would have been okay too, but if you have to do it by hand, it really gets hard to keep track of which ranges are done, which files are merged, etc.
I just took a look at the sieving results of ceselb.
When sieving up to p=2M there are a total of 1278701 (+/- a few) numbers remaining, so my guess wasn't that bad.
I tried the k with the most numbers left (55459), and when I started sieving I was removing almost 150 numbers per second (P4 @ 2.4GHz), dropping to 65 numbers per second when sieving to p=5M (and removing over 10000 composites). So I think we would be able to get this down to less than a million in not too much time. But I don't know how far Louie is with implementing things, and whether it makes sense to give the remaining k's a good head start.
Yikes, sorry guys, I have been having problems with the PC I was using to sieve, and I won't be able to continue with the sieving until I get it back up and running. I left it running over the winter break, and when I got back it was toast. It got up to 1340G, so I'll post the files here if anyone cares to continue the range up to 1400G.
Sorry once again, and I'll be back as soon as I can figure out what's up with the stupid sieving PC.
The removed numbers:
p=1316562430213 divides n=2443743
p=1318610142973 divides n=2531703
p=1318880710747 divides n=2415447
p=1339017372383 divides n=2717703
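A claimed factor like these is cheap to check with one modular exponentiation. A sketch of such a check (the verifies() helper is hypothetical, not anyone's actual script):

```python
def verifies(p, n, k=4847):
    """True if p divides k*2^n + 1, via modular exponentiation."""
    return (k * pow(2, n, p) + 1) % p == 0

# The four factors reported above (assuming they are for k=4847):
reported = [(1316562430213, 2443743),
            (1318610142973, 2531703),
            (1318880710747, 2415447),
            (1339017372383, 2717703)]
for p, n in reported:
    print(p, verifies(p, n))

# Tiny hand-checkable case: 4847*2^1 + 1 = 9695 = 5 * 7 * 277
print(verifies(5, 1), verifies(7, 1))
```

Because verification is this cheap, factor submissions can be accepted from anyone and checked server-side in a few lines.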
Yeah. Paul has finished a stand-alone Windows program that does the multiple-k sieve. He sent me a copy and I'm very excited about it. Not only is it faster than NewPGen 2.70... it is also capable of sieving all k-values at once. This is a huge improvement over single-k-value factoring. It will make sieving to even higher ranges a viable and intelligent option. If enough people sieve, this could have a decent impact on overall system throughput.
Right now we're trying to decide on a reasonable sieve range and finishing up the web submission page. I'm thinking n = 3 - 5 million. There is still time to change the "official" sieve file, but don't get too hung up if I decide to limit the n range. By the nature of the sieve being public, you don't have to use my "recommended" range if you don't want to. If you sieve a larger range, that will work too. Basically, do whatever you want to do to find the factors that you want to find. There's nothing even saying that you need to use the program Paul made for us if you work for the NSA and know a better way to factor these numbers. That's the beautiful part... factors can be obtained however you choose to obtain them. It's incredibly trivial to verify them. In fact, I have finished a factor verification backend already. It's like 20 lines of code. Seriously.
When Dave has time, he's gonna help me integrate it into a Mason script to parse down results. At that point there will be a new sieve page on the site. It will have instructions, a link to download the multi-k siever, and two submission forms. One will be for the regular siever results. It will be a multi-line text box with a submission button. When you see the siever, you'll understand how easy this will all be. The other submission page will be for old sieve results from NewPGen... it will have the multi-line text box and also a place to specify the k value the results are for. And for the beginning, I'll have a direct link to the forum so people can coordinate the sieving effort until I can make a range "reservation" signup and such.
I know you're all excited to get this stuff up and running but it will probably be a few more days yet. I'm excited too but I want to give things a chance to test out and solidify before public factor submission begins. And we still need to write most of the sieve page.
When it's done, you'll all be the first to know. There will be an announcement on the homepage probably too.
-Louie
Very, very cool.
I anxiously await this being finalized so people can contribute to finding factors. Especially if it helps to increase the overall progress of the project.