Hi MikeH,
Of course, there is no problem with that. In the end, we are all trying to reach the same goal.
The reason I did not declare these two ranges (3900G-4200G and 3650G-3700G, both with a couple of holes) is that there seemed to be nobody else looking for holes. In other words, I did not think anybody had even noticed them. So I contacted Louie directly so as not to confuse anyone.
Louie told me that he had already scheduled taking care of his 3900G-4200G range, so I just started sieving the 10 holes in the 3650-3700 range (one already finished and three in progress).
As soon as I noticed that somebody else was looking for holes, I wrote the coordination post above.
If anybody wants to take any of the remaining six holes, just let me know which one you are taking (so that we don't duplicate effort). If not, I am already patching them. Either alternative is fine with me.
I'll stop re-sieving too. I didn't get that far though:
3911540412763 - 3912478299240
I've submitted the factors that I've found.
I see there is a lot of talk about "Sieving" and "Factors". I have no idea what these are. Is it something that becomes obvious when you get your degree in math?
I have run the SoB client for over 300 days. Now I see there is yet another client. When did this start and is it important?
Is there a written explanation of all this? Something that I could understand? I was never good at math. I took an Algebra course once in HS. I can't recall if I passed it. That happened 40 years ago. Maybe they have some new math now.
Colin Thompson
Don't know about any paper or web page about factoring or sieving, but let me try to explain things a little.
What SoB tries to do is find prime numbers by performing a probable primality test, but most numbers turn out to be composite. Another way of proving a number composite is to find a factor of it. What we try to do with sieving is find 'small' prime factors of all the candidate numbers, just like GIMPS does with trial factoring. But instead of trying to factor one candidate at a time, the sieve takes a prime number and checks whether it divides any of the candidates left.
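To make the idea concrete, here is a tiny sketch in Python (illustrative only; the function name and the candidate list are made up, and real sievers like SoBSieve use much cleverer discrete-log tricks rather than one modular exponentiation per pair):

```python
# Toy sieve step: for a prime p, the candidate k*2^n + 1 is divisible
# by p exactly when k*2^n == -1 (mod p). We test one prime against
# every remaining (k, n) pair, instead of factoring one candidate.

def find_factors(p, candidates):
    """Return the (k, n) pairs whose k*2^n + 1 is divisible by p."""
    return [(k, n) for k, n in candidates
            if (k * pow(2, n, p) + 1) % p == 0]

# Tiny made-up example: 1*2^1 + 1 = 3 is divisible by 3,
# while 1*2^2 + 1 = 5 and 3*2^2 + 1 = 13 are not.
print(find_factors(3, [(1, 1), (1, 2), (3, 2)]))  # [(1, 1)]
```

Because `pow(2, n, p)` works with residues rather than the full number, the same check stays cheap even for the project's huge exponents such as n = 14280367.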
At this moment we are testing the remaining 12 K values for N between 3,000,000 and 20,000,000.
This is a total of 204 million numbers. Most numbers can be divided by small primes, so we don't have to perform a PRP test on them (a single test can take days at the higher N's). At this moment there are about 630,000 numbers left whose status is unknown (you can see this number on the project stats page). As long as sieving eliminates more numbers in a given time period than PRP testing does, it's worthwhile to continue sieving.
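As a rough back-of-the-envelope check of that trade-off (the formula and the numbers below are my own heuristic, not project measurements): each prime p removes about 1/p of the surviving candidates, so summing over the primes in a range gives a Mertens-style estimate of the expected factor yield.

```python
import math

def expected_factors(remaining, p_lo, p_hi):
    """Heuristic only: summing ~1/p over primes in (p_lo, p_hi) gives
    approximately remaining * (ln ln p_hi - ln ln p_lo) by Mertens'
    theorem. Real yields differ because each k has its own weight."""
    return remaining * (math.log(math.log(p_hi)) - math.log(math.log(p_lo)))

# With ~630,000 candidates left, sieving 10T..20T would remove very
# roughly 14,000 of them on this estimate.
print(round(expected_factors(630_000, 1e13, 2e13)))
```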
There is a sieve program available on the website, but it requires manual configuration, reservation and returning of the results, and you won't find a prime with it.
It's important that a small group continues sieving, but it's certainly not meant to make people stop running the SoB client. Just do whatever you think is the most fun.
Thanks for the info, smh. I'll pass on sieving and continue running the SoB client
Colin Thompson
Hey, where are the new clients? There hasn't been one in 3 weeks. Come on, I want my SoBSieve 1.25 and NBeGon 011 (preferably with a >20% speed increase).
To celebrate the start of the PRP testing of n>3M, here is a little summary of the sieving efforts to date. I have used the latest results200.txt and results.txt to generate the numbers below.
The range 1G<p<200G is 100% complete.
The range 200G<p<5T is ~97% complete.
The range 5T<p<10T is ~48% complete.
Quite an accomplishment!
I've also had a stab at producing some (very unofficial) stats.
I have scored as follows. Each unique factor for a k/n pair scores (p/1T) e.g. a factor where p=825G will score 0.825. Only the lowest p for a k/n factor scores, duplicates at a higher p do not score. Factors for n<3M are not scored. Factors for n>20M are not scored.
Here are the top 5 for users and teams. I've attached the full stats. For column headings, FacU is number of unique factors, FacD is number of duplicate factors, everything else should be reasonably obvious.
UserId UserName Score ( %) FacU ( %) FacD n<3M 3M<n<20M n>20M
1 louie 48041.38 (28.98) 144251 (53.62) 7197 507 143744 0
0 unknown 16607.47 (10.02) 9392 ( 3.49) 1473 108 9284 0
1608 MikeH 12826.83 ( 7.74) 8123 ( 3.02) 1466 0 8123 0
1577 OrkunBanuTST 7427.97 ( 4.48) 8843 ( 3.29) 677 0 8843 0
627 Antiroach 6368.77 ( 3.84) 1795 ( 0.67) 136 0 1795 0
TeamId TeamName Score ( %) FacU ( %) FacD n<3M 3M<n<20M n>20M
3 Michigan 53512.53 (32.28) 155805 (57.91) 8406 678 155127 0
0 unknown 46657.17 (28.15) 39008 (14.50) 4532 674 38334 0
81 TeamPrimeRib 10243.45 ( 6.18) 14704 ( 5.47) 1352 14 14690 0
9 Rechenkraft 9565.25 ( 5.77) 11339 ( 4.21) 1505 3 11336 0
53 Anandtech 6914.17 ( 4.17) 3390 ( 1.26) 461 21 3369 0
There seems to be a slight stats glitch (on the SoB HP):
According to the stats, 3111 tests are pending and there are 3102 tests <3M remaining. That would indicate that there are already n's >3M handed out.
But a look at the test window (which is 7 minutes younger) shows that we're still 4K away from that range - which translates into 100-200 tests still to do...
Nice job MikeH!
The only thing that I would like to point out to everyone is that some of the "holes" are still being worked on, i.e. the people reserving them have not reported them complete. Louie is taking care of another range (see the posts above, 7 or 8 or so). So it may be too soon to start resieving the holes! Maybe let the people who did them originally redo them, for now at least?
Joe O
Thanks Joe,
You're dead right about the 'holes'. I'd only included the files so that anyone who's interested could see where the missing 3% and 52% were. I haven't looked at the 5T-10T file, but certainly for p<5T I think there are no holes which are unaccounted for.
Many thanks for pointing that out. The last thing I'd encourage is duplication of effort.
Mike.
Very nice job Mike.
Both the hole finder and the stats are cool.
If someone likes, he can recheck the range 7086 - 7089.
I had a hole between 7086093056041 and 7088983041311 -> almost 2.9G big. The distance between two successive factors is usually a lot smaller...
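A gap check like the ones being reported here can be sketched in a few lines (this is my own illustrative version, not MikeH's actual hole finder): sort the submitted factor p values and flag any gap between neighbours above a threshold.

```python
def find_holes(factor_ps, max_gap):
    """Return (lo, hi) pairs where successive factor p values are more
    than max_gap apart - likely unsieved holes."""
    ps = sorted(factor_ps)
    return [(a, b) for a, b in zip(ps, ps[1:]) if b - a > max_gap]

# The gap reported above (the third p value is a made-up neighbour):
print(find_holes([7086093056041, 7088983041311, 7089000000000],
                 1_000_000_000))
# [(7086093056041, 7088983041311)] - a gap of almost 2.9G
```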
I have got a SoBSieve 1.25 here that I have developed... it is a little faster than 1.24 - around 15% faster in tests with this 466 MHz PIII at p=1T.

Originally posted by Moo_the_cow:
Hey, where are the new clients? There hasn't been one in 3 weeks. Come on, I want my SoBSieve 1.25 and NBeGon 011 (preferably with a >20% speed increase).
I suppose I ought to release it now!
Regards,
Paul.
If someone likes, he can recheck the range 7086 - 7089.
I'll check that and tell you if I find something.
SoBSieve 1.25 is slower than the console version and takes much more memory (and more than 1.24), at least on my Tualatin.
I would like to see a console version where I could change the time between screen outputs.
Thanks Paul!
I didn't check the memory usage, but on a PIII/500 and a Celeron/450 it's slower than the console version.
Any chance for a faster console version?
Joe O
SoBSieve 1.25 is 1% faster than 1.24 on my P4-1700, at p=6T
v1.25 is 7% faster than 1.24 for me.
Someone should do a graph of the relative speeds of all the sieves.
It would be neat if someone made a graph showing the first time each version of each sieve (nbegon, sobsieve, sobsieve console) was posted to the forum vs how long it takes for that version to sieve a 1G range of factors (say around p=10T) on a constant arch (with a properly set alpha factor for each version just to be fair).
Wouldn't be too much work. All the versions and dates are on the forum or Phil's site.
Everyone loves time vs. time graphs.
-Louie
Ah, you are all speed freaks rather than GUI freaks! Well, that is absolutely fine by me...

Originally posted by Joe O:
I didn't check the memory usage, but on a PIII/500 and a Celeron/450 it's slower than the console version.
Any chance for a faster console version?
...I just did a build of the Console version, and it was no faster. That is not much of a surprise, really, as a lot of the extra speed came from changing the compilation options (I actually turned *off* compiler optimisations!).
Regards,
Paul.
Maybe you can enhance the "comfort" of the console version if speed is already at max...
Some things that came to my mind after some weeks of using:
- shorter save times
Right now I have to manually copy the current P value into the config file - at least on those computers that don't hibernate...
Alternatively a key combination to write the current status to file and exit - if implementing this handler doesn't affect performance.
- option to fine-tune alpha setting
I guess here one can squeeze out an additional percent or two. According to my observations, the optimal value depends on the system architecture and the search range seems to have a slight influence, too.
- fewer status outputs (optional)
As the speed is really impressive, I get 1.5 status updates per second on some systems. I'm not sure if there will be a performance improvement when the output rate is reduced, but I guess it's worth a try.
- output of "clean" factor file
When computation of the set range is completed, the program could create a new file and copy the found factors into it. Maybe overwrite protection would be ideal to prevent overwriting factor files not yet submitted...
- shorter save times
Right now I have to manually copy the current P value into the config file - at least on those computers that don't hibernate... Alternatively, a key combination to write the current status to file and exit - if implementing this handler doesn't affect performance.

OK, that might be possible.

- option to fine-tune alpha setting
I guess here one can squeeze out an additional percent or two. According to my observations, the optimal value depends on the system architecture, and the search range seems to have a slight influence, too.

Ah, there is a -a=<value> command line switch to change the alpha; 2.5 is used by default. Ah, but that doesn't seem to work...

- fewer status outputs (optional)
As the speed is really impressive, I get 1.5 status updates per second on some systems. I'm not sure if there will be a performance improvement when the output rate is reduced, but I guess it's worth a try.

Hardly *any* of the time is spent doing IO. I could reduce it, but the net effect would be about a 0.001% gain.

- output of "clean" factor file
When computation of the set range is completed, the program could create a new file and copy the found factors into it. Maybe overwrite protection would be ideal to prevent overwriting factor files not yet submitted...

There is another approach that I could use, but backward compatibility would be a problem. Are you not using the small utility that was posted up here to clean the file before submitting it?
Regards,
Paul.
If someone likes, he can recheck the range 7086 - 7089.
I found another factor in the interval:
7088930748583 | 24737*2^14280367+1
But I think it may be a duplicate
That's strange - 24737*2^14280367+1 is divisible by 5 (according to NewPGen 2.8), so this number shouldn't show up in any dat file.
Are you not using the small utility that was posted up here to clean the file before submitting it?

If you mean the Java application - I wrote it. Sure, it works, but I thought it would be even more comfortable to skip that part. It would be one double-click less...
Just copy the results directly from the SoBSieve window. Why complicate things unnecessarily?
Originally posted by smh:
That's strange - 24737*2^14280367+1 is divisible by 5 (according to NewPGen 2.8), so this number shouldn't show up in any dat file.

That number isn't divisible by 5. I just checked it with GMP. Maybe you checked it without the +1, or in some other NewPGen mode?
Here are the factors in the db for it.
58828348459 | 24737*2^14280367+1
7088930748583 | 24737*2^14280367+1
-Louie
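The divisibility dispute above is easy to settle with plain modular arithmetic (standard Python, nothing project-specific): reduce everything mod 5 instead of ever building the multi-million-digit number.

```python
# Is 24737*2^14280367 + 1 divisible by 5? Work mod 5 throughout:
# pow(2, 14280367, 5) computes 2^14280367 mod 5 efficiently.
residue = (24737 * pow(2, 14280367, 5) + 1) % 5
print(residue)  # 2 - not divisible by 5, agreeing with the GMP check
```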
Don't know what I've been doing wrong. I couldn't imagine that it was divisible by five.
Just checked again with NewPGen 2.70, and it doesn't divide by 5. NewPGen 2.80 crashed while testing.
Originally posted by smh:
Don't know what I've been doing wrong. I couldn't imagine that it was divisible by five. Just checked again with NewPGen 2.70, and it doesn't divide by 5. NewPGen 2.80 crashed while testing.

You are right about NewPGen 2.80 crashing - thanks for spotting that. It happens when there is only one number in the fixed-k sieve. I have fixed it now.
Regards,
Paul.
How about a "Done" in the SobStatus.dat file when it finishes?
It does display it on the console!
Joe O
I found another factor in the interval:
7088930748583 | 24737*2^14280367+1

Ok, I just rechecked that factor on the very same machine that let it slip the first time. Now it did find the factor. Seems like it was a computing error...
THX expinete
Maybe I am being presumptuous, but I started my little 533 Alpha on the 3560-3570 range that Sonicbadger may or may not finish. It will take me a week to do that range, and maybe Sonicbadger will have it done by then, but if not, I will. The first factor I found in that range, within the first ten minutes, had not been reported yet, so I am going to bet that the range isn't being worked on.
Someone tell me to stop, and I will. I understand that the lower ranges have more factors, so I figure this is more productive than picking up a block in the 9000+ region.
--
OberonBob
Team ORU
(originally posted on the n<3M double check thread)
I have updated and improved (slightly) my unofficial sieving stats. They can be found here.
I now have an 'excluded' category. I am using a 100<n<50M SoB.dat file sieved to 1G as the test for validity. Excluded factors don't score.
Scoring is also slightly different:
n < 3M: score = p/1T * 0.5 * ((n*n)/(3M * 3M))
3M < n < 20M: score = p/1T
n > 20M: score = p/1T * 0.05
duplicates: score = score * 0.01
This won't be updated daily, but I'll make sure it's more often than weekly.
Mike.
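The updated scoring rules above translate directly into a small function (a sketch with names of my own choosing, not Mike's actual stats script):

```python
def score(p, n, duplicate=False):
    """Score one factor p of k*2^n + 1 under the rules above
    (1T = 1e12, 3M = 3e6, 20M = 20e6)."""
    base = p / 1e12
    if n < 3e6:
        s = base * 0.5 * (n * n) / (3e6 * 3e6)
    elif n <= 20e6:
        s = base
    else:
        s = base * 0.05
    return s * 0.01 if duplicate else s

# The example from the earlier post: p = 825G in the main n range
print(score(825e9, 5_000_000))  # 0.825
```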
I think that sonicbadger's range should be sieved ASAP because we're already at n > 3M.
This post is for coordination purpose.
I suspect that there is a 25G hole within the following range.
pmin=8024924676959, pmax=8050234552069
8010G-8050G is marked as complete (8010 - 8050 jimmy [complete]), and no new factors were submitted for 8025-8050 range within the last two results.txt files.
I'll wait for another day to avoid duplication of work (i.e., to be sure that jimmy did not submit the factors for that range and/or somebody else already started resieving without notice).
If anyone is interested in resieving that range instead of me, please feel free to post below and I'll hand it over. If not, I'll be sieving it anyway.
Regards,
Nuri
I want to help you. I propose to split it in two parts, one for you and another one for me - this way it would be finished faster. Do you like this proposal?
Ok.
I will reduce my range to 8025-8035. Please feel free to take 8035-8050, Troodon.
But please wait one more day before starting the resieve (to be sure that it was not already sieved).
Happy sieving.
OK, I'll start (re)sieving the range 8035 - 8050 after 86400 seconds. Thanks!
Be sure to check the results.txt file tomorrow before starting.
Hi,
I'm sure I sieved this range.
I split the range: 8010-8025 was sieved at home and 8025-8050 at work. I will check my computer at work tomorrow and answer tomorrow evening (Germany time). So please hold off on resieving until then.
I'm sorry about the trouble,
Jimmy
BTW: My English is not the best....
It's no trouble at all Jimmy.
It's nice to hear that the range is already sieved. We saved roughly 70 more candidates without any additional work.
Please let us know the result.
Happy sieving all.
Nuri