The older thread is closed. Enjoy. :cheers:
All DoubleCheckers (1 < n < 3M),
Just a reminder, you can continue sieving, but don't submit yet. We (Louie, MikeH, Joe O. et al) are taking a checkpoint, to prepare some files for Louie.
Edit: Thanks Nuri! A good point, even though the thread title should alert people.
And, to avoid any confusion, this is for double check sievers only.
All others, please continue as usual.
Louie,
You asked for 11 files, one for each k, with one n per line, up to but not including the "n bound: lower" value. Here they are.
Quote:
if someone makes the 11 remaining text files of n values for each k (except 55459 of course) that would help me. you should be able to tell what n value to end the files at from the page http://www.seventeenorbust.com/stats/rangeStatsEx.mhtml under "n bound: lower". that basically tells the lowest n that SB tested for that k.
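For reference, producing those per-k files is a few lines of scripting. A sketch in Python - the "k n" pair-per-line input format and the file names are my assumptions, not the actual sieve output format, and the "n bound: lower" values come from the rangeStatsEx page:
Code:
from collections import defaultdict

# "n bound: lower" per k, read off the rangeStatsEx page. Only one value
# (mentioned later in this thread) is filled in here; add the other ten.
LOWER = {24737: 300127}

per_k = defaultdict(list)
with open("remaining.txt") as f:          # hypothetical dump of "k n" pairs
    for line in f:
        k, n = (int(x) for x in line.split())
        if k in LOWER and n < LOWER[k]:   # up to, but not including, the bound
            per_k[k].append(n)

for k, ns in sorted(per_k.items()):
    with open(f"{k}.txt", "w") as out:    # one n per line, as requested
        out.write("\n".join(str(n) for n in sorted(ns)) + "\n")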
I won't be participating in this sieve double checking project,
but I suggest that you modify the SoB.dat file to sieve
100K<n<3M instead of 1<n<3M. I think that this will give
your project a 2% speed increase, and the numbers for
1<n<100K can be checked extremely fast by Louie.
(It took me just 4 min to check 1<n<10K for all 12 k's.)
I've checked the data I submitted, and you have successfully eliminated it from the file. I'm going to start using your new SOB.DAT file.
Again, I'm sieving but not submitting until after we hear from Louie.
I'm sure you have statistics; here are mine:
K      NLeft  NSeed  NLow  NHigh
4847   11974   8031  2247  2999967
5359   14769   4315   622  2999710
10223  13648   2764   509  2999909
19249   4930   2170  1166  2991038
21181  11666   3152  1148  2999492
22699   4766   1437  1414  2997622
24737  12175   1204   607  2999887
27653   6832    750  1257  2999769
28433   6582   4445   553  2999425
33661  11650   2529   432  2999688
55459  16475      0  1018  2999938
67607   4179    570   531  2999547
NLeft - Number of N left to factor
NSeed - Number of N sent to Louie to seed the database
NLow - Lowest N left to factor
NHigh - Highest N < 3M left to factor
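The NLeft, NLow and NHigh columns can be recomputed straight from a per-k file of all remaining n values (one per line; the file name below is hypothetical). NSeed has to come from what was actually submitted. A minimal Python sketch:
Code:
def summarize(path):
    ns = sorted(int(line) for line in open(path) if line.strip())
    return len(ns), ns[0], ns[-1]        # NLeft, NLow, NHigh

print(summarize("4847.txt"))             # expect (11974, 2247, 2999967)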
all numbers added. :)
the only changes I made before adding the numbers were to remove tests with n < 1000 from the files (SB sometimes crashes with extremely low tests), and to remove these tests for k=4847:
2000487
2000607
2000631
2000727
2000943
2001327
2001471
2001543
2001831
2002071
2002143
2002407
2002503
2002767
2003151
2003463
2003727
2004951
2005551
2005647
2005671
because each of these tests would have been assigned to regular users and not "secret" the way the server is set up right now. it is only 21 tests, so there's a good chance I could just slip them in and they would finish before most users even noticed they had them, but regular users may not be interested in doing double-check work now, so I won't make them.
if secret actually burns through all the work it has now, which i think will take at least a few weeks, then i'll manually assign the above tests to myself (or someone who's interested) and do them just to patch any holes in the ranges. also, i highly doubt that those are prime... they were checked the first time by Samidoost and he posted residues for all of them. He'd be the last person i'd expect to miss a prime. ;)
anyway, you can submit factors for the double-check again now. and have fun doing a few dozen proth tests an hour for the next few days if you decide to join in on the "secret". :)
-Louie
A few dozen an hour? Currently a test doesn't even take a second :-) The client needs more time to connect to the server and request a new test at the moment.
Quote:
anyway, you can submit factors for the double-check again now. and have fun doing a few dozen proth tests an hour for the next few days if you decide to join in on the "secret".
Fun to see the N go up so fast. It was at 6000 a couple of minutes ago, now it's already at 9000. Unfortunately, things will slow down pretty soon.
How many tests are there for 'secret' (available and completed)?
one little caveat for anyone who does secret testing in the next couple days, i recommend you run a sieve in the background too.
reason being the server will actually rate limit your downloading of work units because it will think you're finishing blocks too fast... so it may spend as much time waiting for workunits as it will actually running them.
what i'm doing is sieving a small range at idle priority and then running secret on SB with slightly higher priority. this gives 100% of the CPU to SB when it has a test and 100% to the sieve when it's in between tests waiting for the networking wait to time out.
i'd recommend a similar setup for those of you who feel inclined to do secret tests for the next couple days.
-Louie
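That two-process setup can also be scripted. A sketch for Windows in Python (the program names and arguments are placeholders; the priority constants are subprocess's standard Windows creation flags):
Code:
import subprocess

# Sieve at idle priority: it only gets the CPU when nothing else wants it.
subprocess.Popen(["SoBSieve.exe", "SoB.dat"],
                 creationflags=subprocess.IDLE_PRIORITY_CLASS)

# SB client slightly higher: it wins the CPU whenever it has a test, and
# the sieve soaks up the gaps while SB waits for the server to respond.
subprocess.Popen(["sb.exe"],
                 creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS)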
Quote:
reason being the server will actually rate limit your downloading of work units because it will think you're finishing blocks too fast... so it may spend as much time waiting for workunits as it will actually running them.
I ran the "secret" account for a while for the fun of it,
and I had to wait 4 min for just 1 WU :eek:
Is this normal, or is the server simply overloaded?
Just fired up secret. Running for two minutes, it took 7 seconds CPU time. :D
Strangely, the client uses at most 33% of the CPU. Maybe there are other bottlenecks while processing such "small" values?
the rate limit is by user. the more people that are running secret, the more noticeable it will be. i just noticed it got a lot worse here... expect it to not matter in a few hours i'd guess.
run a sieve in case secret gets banned/limited in that time.
-Louie
Just saw that the user stats for secret don't seem to be filled out completely...
Since no date has been inserted, the script assumes a start date of 1-1-1970. :)
The profile doesn't look good either:
One could think he's a mere phantom! :D
Quote:
System error
error:
no such user at /usr/local/apache/htdocs/sb/data/getProfile.mc line 28.
context:
...
24: </%args>
25:
26: <%perl>
27:
28: my $profile = ($m->comp('../db/select.mc',
29:     tables => [ 'profiles p' ],
30:     fields => [ 'p.userID', 'p.name', 'p.location',
31:               'p.email', 'p.gender', 'p.age', 'p.comment' ],
32:     where => [ 'p.userID = ?' ], params => [ $userID ]))[0]
...
code stack:
/usr/local/apache/htdocs/sb/data/getProfile.mc:28
/usr/local/apache/htdocs/sb/profiles/index.mhtml:26
/usr/local/apache/htdocs/sb/autohandler:44
/usr/local/apache/htdocs/sb/syshandler:18
Update: Just reached n=40K :|party|:
Finally completed 450-500. Currently submitting the results. I'll post the dat file in the sieve coordination thread.
I'm joining the secret testing as well. Good luck to every secret user. :cheers:
BTW, at which point would you consider it a good time to update the SoB.dat? My vote is for n=300,000. This is soon enough, and also the lowest n lower bound was 300,127 (for k=24737). We might get another update when the double checking of the remaining k's finishes.
What do you think?
I vote for n=400,000, mainly because the n lower bound for
k=67607 (which has the lowest proth weight) is 400,000.
On the other hand, I vote for updating the other sieving
project's SoB.dat file at n=5M.
Out of curiosity, how many "secret" tests have you done?
I've done about 50, and I'm wondering if anyone has done
1000.
Moo, can you still get new tests from the server?
It stopped giving new tests to my client after the third test. I haven't been able to get one for the last 30 minutes.
It appears so.
I just got:
got proth test from server - k=24737, n=50983
If the server stops giving new tests to your client after
the 3rd test, I recommend that you close it and restart it.
I've done about 400, but most of them were with very low N (<20K).
Quote:
Out of curiosity, how many "secret" tests have you done?
I've done about 50, and I'm wondering if anyone has done
1000
I've had that problem a couple of times too; after a few dozen tests the client appears to hang. Stop/start doesn't help. I have to exit the client and start it again.
Quote:
It stopped giving new tests to my client after the third test. I haven't been able to get one for the last 30 minutes.
The client was somehow stuck at 5359*2^48150+1 and couldn't report it to the server.
Thx for the idea smh and Moo_the_cow. Unfortunately that didn't help either. But it made me think of another solution: I used the trick of changing the user name to some dummy name (which cleared the cache), then changed it back to secret, and restarted the client afterwards. It works just fine now.
BTW, the pending test distribution graphs for six of the k values (10223, 19249, 27653, 28433, 33661, and 67607) do not show the DC tests.
This is strange, because the server seems to distribute all k values except 55459 (which is normal, because its lowest available n is still larger than our tests - 183214 vs. ~105000).
Anyway, this is nothing crucial. Just wondered why.
Happy double checking.
Another thing i was wondering about. Will this 'secret' account only test the numbers which were done outside of SoB, or will this start double checking all N's?
I've done 400-500 tests yesterday, I think.
Sometimes the connection to the server was very slow, but that changed when we hit ~35K...
Already at 110K - you guys did a nice job over night. :cheers:
I've done over 800 tests so far. Currently at 155k.
What do you think the low end of the sob.dat file should be?
Quote:
Nuri said
I'll rejoin DC sieving as soon as an updated SoB.dat (which excludes the lower n values) is available.
Since the current 'secret' check can be considered a double check of the low n values, it would seem logical to adjust the sob.dat such that the low value becomes the smallest original 'n lower bound'. Does this make sense?
This being the case, the lowest n bounds were (I think)
k=24737, ~300000
k=27653, ~340000
k=67607, ~400000
k=10223, ~610000
If we therefore raised the floor of the sieve to 300K, then all remaining composites with a single test will be covered. It may however be better to pick a higher value (say 600K), so that the sieving is more efficient. On the downside, ~2000 remaining composites will be outside the sieve, but since these are all small n values, a PRP double check would be quite quick. On the upside, we will see a ~12% increase in speed.
Whatever we chose to do, I can generate a new sob.dat file with little effort.
All comments welcome.
Actually, this is the first time that SoB is checking these values! I think that we should continue with the current low end values at least until "secret" has checked all the values up to the point where SoB previously started. It's more efficient to eliminate values by sieving than by PRPing, so we will be helping out "secret" by continuing to check these low values.
Quote:
Since the current 'secret' check can be considered a double check of the low n values,
By the way, we're still sieving faster than the 3M < n < 20M folks.
MikeH, I agree, at least as long as Louie doesn't change the lower bound for the daily results.txt file. In fact, even if he did, but updated the lowresults.zip file weekly, we would be OK.
Quote:
Originally posted by MikeH
...and the factors.
I'm thinking that beyond p=1T there is little value in posting the factors to this forum, since they can be resolved from the daily updated results.txt?
Correct, but these ranges were checked by previous searchers; that's why it should be considered a 'double check', even if not in the truest sense.
Quote:
Joe O said
Actually, this is the first time that SoB is checking these values!
As for where the low end of the sob.dat file should be - secret is now at 190000. Not sure exactly how quickly this is moving, but if the current secreters keep going, we might be at 300K by the weekend. So how about we lift to 300K when secret hits 300K, then discuss what we do after that?
That would give us a 5% speedup. Is it worth it?
If, for some reason, we had to come back and do 300 < n < 300000 it would add 32% to the effort.
When the other searchers "offer" their residues, I think it can indeed be considered a double check...
The problem is, the programs that tested these values previously did not produce residues.
Here is a comment by Louie in the Proth tests completed / "secret" double-check thread.
But still, it's no big deal. These numbers are relatively small, and could be checked again if any need arises in the future.
Quote:
anyway, technically, the lower ranges of 12 remaining k values have never been checked by SB. We took it on faith that the previous searchers organized by Keller did them right. Most of them used proth.exe or old versions of prp that didn't produce residues... so there is really no way to verify their work without doing it again. However, these tests take very little time compared to current tests. It's kinda fun to watch my machine do a test a minute.
Not really, I think sieving for N < 300,000 is sufficient, at least for the biggest part of it. I don't know how many factors are found in a given time compared to the number of tests that can be done in the same time, but the smallest tests take so little time that it's really not worth spending a lot of effort on them. Double checks shouldn't even be done with the SoB client, unless it is programmed to do so (shift some bytes so you use different values throughout the calculation and shift them back in the end to get a matching residue - see the sketch after the quote below).
Quote:
Originally posted by Joe O
If, for some reason, we had to come back and do 300 < n < 300000 it would add 32% to the effort.
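A minimal sketch of that shifted-residue idea (an illustration of the general technique, not SoB's actual client code): start the Proth test from 2^s * a^k instead of a^k, square the known shift factor alongside the residue, and divide it out at the end. Two runs with different shifts exercise different intermediate values but must finish with the same residue:
Code:
def proth_residue(k, n, a=3, s=0):
    # Residue of a^((N-1)/2) mod N for N = k*2^n + 1, computed from the
    # "shifted" start value 2^s * a^k. The shift factor c is squared along
    # with the residue, so it stays known exactly and can be divided out.
    N = k * (1 << n) + 1
    y = pow(a, k, N) * pow(2, s, N) % N
    c = pow(2, s, N)
    for _ in range(n - 1):               # (N-1)/2 = k * 2^(n-1)
        y = y * y % N
        c = c * c % N
    return y * pow(c, -1, N) % N         # 2^s is invertible mod odd N

# Different shifts, same residue (a residue of N-1 would mean N is prime):
assert proth_residue(4847, 2247) == proth_residue(4847, 2247, s=12345)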
The sieving effort is roughly proportional to the square root of the n range.
SQRT(3000000-342) ~ 1732
SQRT(3000000-300000) ~ 1643 ~ 95% of the previous line, or a 5% saving.
SQRT(300000) ~ 548 ~ 32% of the full effort.
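The same arithmetic as a quick Python check, using the SQRT(range) cost model above:
Code:
from math import sqrt

full    = sqrt(3000000 - 342)       # keep 342 < n < 3M   -> ~1732
trimmed = sqrt(3000000 - 300000)    # floor at 300K       -> ~1643
redo    = sqrt(300000)              # redo n < 300K later -> ~548

print(f"saving now: {1 - trimmed / full:.1%}")   # ~5.1%
print(f"redo later: {redo / full:.1%}")          # ~31.6%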
SMH, I agree with you that "sieving for N < 300000 is sufficient." That said, let's just do it now, once and for all.
MikeH, I agree with you that duplicate factors add nothing to the project. So if you produce an SOB.DAT file for 342 < N < 2999967 with those N removed that have had factors found since the last one was produced, I for one will use it.
I've been trying to factor the following low N so that we can shorten the range:
33661*2^432+1 (P-1, P+1, and 336 ECM curves done on this one so far using GMP-ECM 5.0)
10223*2^509+1
67607*2^531+1
28433*2^553+1
5359*2^622+1
5359*2^886+1
24737*2^991+1
When those are done I've got a list of expressions with N < 3000, in order by N, for which we haven't found factors yet. Does anyone want to pitch in? I've attached the list, one expression per line. Yes, this is more effort than sieving, but if we attack them in order, we can shorten the sieving interval and gain some of the effort back.
This is exactly why I proposed 300,000 a couple of days ago. While I still think it might be 'a little early' to increase the lower range to an upper value like 600K, I see no reason not to push it up to 300,000.
Quote:
Originally posted by MikeH
If we therefore raised the floor of the sieve to 300K, then all remaining composites with a single test will be covered.
Tests below 300K take a very short time. We're just a handful of people using secret, but even that will be enough to finish all the way up to 300K within a week.
It will take even less time if, for whatever reason, we want to check them once more in the future (faster clients, faster boxes, and hopefully fewer k's). So why worry about it?
And, don't forget, we've already sieved n<300K up to 1T. That means, we've already eliminated most of the candidates.
Well, in fact PRPing might be even more efficient for those low values.
Quote:
Originally posted by Joe O
It's more efficient to eliminate values by sieving than by PRPing, so we will be helping out "secret" by continuing to check these low values.
By the way, we're still sieving faster than the 3M < n < 20M folks.
Here's a little calculation:
Mike found ~570 factors for 800G-900G; that means ~5.7 factors per G. I've not counted the exact number of factors below 300K, but assuming they are evenly distributed, and using the speed of my PC at DC sieving ranges (125,000 p/sec) => 5.7 * (300 / 3,000) * 125,000 * 60 * 60 * 24 / 1G = 6.16 factors below 300K per day. (And don't forget, this figure will decrease as we go up in p values.)
I'm taking the speed of a prp test @ n=150,000 as the average speed of prp tests below 300K. My PC finishes an n=150,000 test in 6 minutes => 60 * 24 / 6 = 240 tests below 300K per day.
So, with the current settings that we have (SoB.dat up to 3M, and p at 1T), we can say that a prp test is roughly 39 times more efficient than sieving for n<300K.
PS: Please feel free to correct me if I made any mistakes in assumptions and/or calculations above.
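The same estimate, spelled out in Python so anyone can plug in their own rates (all inputs are the figures quoted above):
Code:
factors_per_G = 570 / 100          # MikeH: ~570 factors in 800G-900G
share_below_300K = 300000 / 3000000
p_per_sec = 125000                 # Nuri's DC sieving speed
seconds_per_day = 60 * 60 * 24

factors_per_day = (factors_per_G * share_below_300K
                   * p_per_sec * seconds_per_day / 1e9)   # ~6.16
tests_per_day = 60 * 24 / 6        # one n~150K prp test every 6 minutes

print(f"{factors_per_day:.2f} factors/day vs {tests_per_day:.0f} tests/day "
      f"-> prp is ~{tests_per_day / factors_per_day:.0f}x more efficient")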
Sieving becomes more and more valuable, and it makes more sense to sieve deeper, as n (and therefore the time to prp test a candidate) increases.
Yes, we're sieving roughly 2.4 times faster than the 3M < n < 20M folks, but this does not necessarily mean we are more effective. A factor there eliminates a prp test @ n=11,500,000 on average, whereas a factor here eliminates a prp test @ n=1,500,000 on average.
Well, think of it the other way around, Joe. Why should we slow down our sieving effort by 5% going forward if it takes just a week to test every single remaining candidate below 300K?
Quote:
Originally posted by Joe O
That would give us a 5% speedup. Is it worth it?
BTW, I also agree with that. We really do not need to post factors above 1T to the forum.
Quote:
Originally posted by MikeH
I'm thinking that beyond p=1T there is little value in posting the factors to this forum, since they can be resolved from the daily updated results.txt?
Sorry, Joe, I didn't notice your post while writing mine.
So, it seems like we all agree on the changeover. Happy sieving, everyone.
Not quite, I haven't finished my range yet. I'm at
pmin=913924489217
Quote:
And, don't forget, we've already sieved n<300K up to 1T.
Edit: I'd be happier if we got to 3T before truncating the range.
Oh, I see, sorry about that.
Then, it would be very kind of you to continue using our current SoB.dat file for that range. This way, we will know for future reference that 100<n<300,000 is sieved up to 1T.
EDIT: BTW, although I think we've already sieved n<300K enough, doing the changeover at 1T or 3T is no big deal for me. Agreeing that we should do the changeover sometime is more important. Deciding exactly when to do those changes (and up to which n value) is a secondary issue.
"Agreeing that we should do the changeover sometime is more important."
I agree. Let's do the changeover. I'll finish my range with the current SoB.Dat file. There is one other range in progress,
1000-1100 (Moo_the_cow), which I think should also continue with the current file. Do you think that we could get to 1500 (1.5T) before creating the new file? We should be able to do that before the end of the month (that's only 10 days away). Mike, would it be easier for you to create the file on a weekend? If so, how about the 29th/30th, no matter how far we've gotten?
Quote:
Do you think that we could get to 1500 (1.5T) before creating the new file? We should be able to do that before the end of the month
I think that we can get to 1.5T, but only if I run this
2x checking sieving project 24/7 (or it may take as long
as 18 days to finish my range :eek: ) Currently, I'm
running the 2x sieve only in the daytime, and running
the other sieve overnight. However, if I do only this
sieve 24/7, I'll be delaying the other project, and some
guys are going to be pretty mad about it. I'm reluctant
to change my reserved range, since I feel that a lower
range is more productive.
Of course, all suggestions are welcome.
"However, if I do only this sieve 24/7, I'll be delaying the other project, and some guys are going to be pretty mad about it. I'm reluctant to change my reserved range, since I feel that a lower range is more productive."
Well I certainly don't want anyone to get mad. And there is absolutely no need for you to change your reserved range. Just keep on going the way you are and continue to use the SOB.DAT file that you are currently using until you finish your range. That way we will know that all the n < 3000 will have been sieved to 1.1T.
Now if we could get some "volunteers" to agree to do the range 1100 to 1300 with the current SOB.DAT file, I will volunteer to do 1300 to 1500 with the current SOB.DAT file. Then we will have all the n < 3000 sieved to 1.5T!
What B1 did you use for ECM (and P-1/P+1)? I remember I have done about 200 curves at B1=250K on all of the above, so you should be running curves at B1=1M now.
Quote:
Joe O:
I've been trying to factor the following low N so that we can shorten the range:
33661*2^432+1 (P-1, P+1, and 336 ECM curves done on this one so far using GMP-ECM 5.0)
10223*2^509+1
67607*2^531+1
28433*2^553+1
5359*2^622+1
5359*2^886+1
24737*2^991+1
The first of the numbers above can be done with UBASIC's NFSX. A wild guess, but I think a fast Athlon (or P4) can do this number in a day. If someone wants to give it a shot and needs help setting up the program, just ask (it's a bit more complicated than creating an 'in' file).
I will, but you don't seem to have them attached. I created one by hand for N's between 1000 and 3000. Did I miss any?
Quote:
Joe O:
When those are done I've got a list of expressions with N < 3000, in order by N, for which we haven't found factors yet. Does anyone want to pitch in? I've attached the list, one expression per line.
edit: I sorted the file by N myself and ran a bit of P-1 on it and already found a factor for 12 of the 66 numbers. I'll submit them in a minute.
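For anyone curious what "a bit of P-1" involves: stage 1 is only a few lines. A sketch in Python (sympy supplies the prime list here; in practice GMP-ECM does this far faster):
Code:
from math import gcd
from sympy import primerange

def pm1_stage1(N, B1):
    # a = 2^M mod N, where M contains every prime power <= B1. If some
    # prime factor p of N has p-1 dividing M (i.e. p-1 is B1-smooth),
    # then gcd(a - 1, N) reveals p.
    a = 2
    for q in primerange(2, B1 + 1):
        e = 1
        while q ** (e + 1) <= B1:
            e += 1
        a = pow(a, q ** e, N)
    g = gcd(a - 1, N)
    return g if 1 < g < N else None

# e.g. the first entry of the 55459 list below (most likely prints None):
print(pm1_stage1(55459 * 2**1018 + 1, 100000))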
I started at 1M. I've run the P-1 up to 85E7, the P+1 up to 11E7, and the ECM up to 3M.
I'm sorry, I meant to attach the file, but I must not have clicked on the attach button. Well, I've selected it in the browser again; we'll see if it works this time.
Yes, I would appreciate some help setting it up. First of all, where can I get UBASIC for Windows?
Edit: The following were on my list and not on yours:
55459*2^1018+1
55459*2^1030+1
55459*2^1054+1
55459*2^1306+1
55459*2^1498+1
55459*2^1666+1
55459*2^1894+1
55459*2^1966+1
55459*2^2134+1
55459*2^2290+1
55459*2^2674+1
55459*2^2686+1
Could you let me know which 12 N you've eliminated?
And last but not least, congratulations on eliminating those 12 N!