
Double Checking discussion (1<n<3M)



ceselb
03-15-2003, 10:16 AM
The older thread (http://www.free-dc.org/forum/showthread.php?s=&threadid=2552) is closed. Enjoy. :cheers:

Joe O
03-15-2003, 10:56 AM
All DoubleCheckers(1<n<3M),
Just a reminder, you can continue sieving, but don't submit yet. We (Louie, MikeH, Joe O. et al) are taking a checkpoint, to prepare some files for Louie.



Edit: Thanks Nuri! A good point, even though the thread title should alert people.

Nuri
03-15-2003, 11:19 AM
And, to avoid any confusion, this is for double check sievers only.

All others, please continue as usual.

Joe O
03-15-2003, 02:47 PM
Louie,
You asked for 11 files, 1 for each k, with 1 n per line, up to but not including the n bound: lower. Here they are.





if someone makes the 11 remaining text files n values of each k (except 55459 of course) that would help me. you should be able to tell what n value to end the files from the page http://www.seventeenorbust.com/stats/rangeStatsEx.mhtml under "n bound: lower". that basically tells the lowest n that SB tested for that k.

Moo_the_cow
03-15-2003, 03:08 PM
I won't be participating in this sieve double checking project, but I suggest that you modify the SoB.dat file to sieve 100K<n<3M instead of 1<n<3M. I think that this will give your project a 2% speed increase, and the numbers for 1<n<100K can be checked extremely fast by Louie. (It took me just 4 min to check 1<n<10K for all 12 k's.)

Joe O
03-15-2003, 03:19 PM
I've checked the data I submitted, and you have successfully eliminated it from the file. I'm going to start using your new SOB.DAT file.

Again, I'm sieving but not submitting until after we hear from Louie.

I'm sure you have statistics, here are mine:

K, NLeft, NSeed, NLow, NHigh
4847, 11974, 8031, 2247, 2999967
5359, 14769, 4315, 622, 2999710
10223, 13648, 2764, 509, 2999909
19249, 4930, 2170, 1166, 2991038
21181, 11666, 3152, 1148, 2999492
22699, 4766, 1437, 1414, 2997622
24737, 12175, 1204, 607, 2999887
27653, 6832, 750, 1257, 2999769
28433, 6582, 4445, 553, 2999425
33661, 11650, 2529, 432, 2999688
55459, 16475, 0, 1018, 2999938
67607, 4179, 570, 531, 2999547


NLeft - Number of N left to factor
NSeed - Number of N sent to Louie to seed the DataBase
NLow - Lowest N left to factor
NHigh - Highest N < 3M left to factor

jjjjL
03-15-2003, 06:00 PM
all numbers added. :)

only changes I made before adding the numbers were to remove tests with n < 1000 from the files (SB sometimes crashes with extremely low tests) and to remove these tests for k=4847:

2000487
2000607
2000631
2000727
2000943
2001327
2001471
2001543
2001831
2002071
2002143
2002407
2002503
2002767
2003151
2003463
2003727
2004951
2005551
2005647
2005671

because each of these tests would have been assigned to regular users and not "secret" the way the server is set up right now. it is only 21 tests, so there's a good chance I could just slip them in and they would finish before most users even noticed they had them, but regular users may not be interested in doing double-check work now, so i won't make them.

if secret actually burns through all the work it has now, which i think will take at least a few weeks, then i'll manually assign the above tests to myself (or someone who's interested) and do them just to patch any holes in the ranges. also, i highly doubt that those are prime... they were checked the first time by Samidoost and he posted residues for all of them. He'd be the last person i'd expect to miss a prime. ;)

anyway, you can submit factors for the double-check again now. and have fun doing a few dozen proth tests an hour for the next few days if you decide to join in on the "secret". :)

-Louie

smh
03-15-2003, 06:22 PM
anyway, you can submit factors for the double-check again now. and have fun doing a few dozen proth tests an hour for the next few days if you decide to join in on the "secret".

A few dozen an hour? Currently a test doesn't even take a second :-) The client needs more time to connect to the server and request a new test than to run it at the moment.

Fun to see the N go up so fast. It was at 6000 a couple of minutes ago, now it's already at 9000. Unfortunately, things will slow down pretty soon.

How many tests are there for 'secret' (available and completed)?

jjjjL
03-15-2003, 06:27 PM
one little caveat for anyone who does secret testing in the next couple days, i recommend you run a sieve in the background too.

reason being the server will actually rate limit your downloading of work units because it will think you're finishing blocks too fast... so it may spend as much time waiting for workunits as it will actually running them.

what i'm doing is sieving a small range at idle priority and then running secret on SB with slightly higher priority. this gives 100% of the CPU to SB when it has a test and 100% to the sieve when it's in between tests waiting for the networking wait to time out.
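
For the curious, this two-process setup can also be scripted. Here is a hypothetical Python sketch using the third-party psutil package; the executable names are placeholders rather than the real client filenames, and these priority constants are Windows-only:

import subprocess
import psutil  # third-party: pip install psutil

# start the sieve at idle priority...
sieve = subprocess.Popen(["SoBSieve.exe"])  # placeholder name
psutil.Process(sieve.pid).nice(psutil.IDLE_PRIORITY_CLASS)

# ...and SB slightly higher, so it preempts the sieve whenever it has a test
sb = subprocess.Popen(["sb.exe"])  # placeholder name
psutil.Process(sb.pid).nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)

sb.wait()  # between tests, the sieve soaks up the otherwise-idle CPU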

i'd recommend a similar setup for those of you who feel inclined to do secret tests for the next couple days.

-Louie

Moo_the_cow
03-15-2003, 06:39 PM
quote:
_________________________________________
reason being the server will actually rate limit your downloading of work units because it will think you're finishing blocks too fast... so it may spend as much time waiting for workunits as it will actually running them.
___________________________________________

I ran the "secret" account for a while for the fun of it,
and I had to wait 4 min for just 1 WU :eek:
Is this normal, or is the server simply overloaded?

Mystwalker
03-15-2003, 06:40 PM
Just fired up secret. Running for two minutes, it took 7 seconds CPU time. :D
Strangely, the client utilizes max. 33% of the CPU. Maybe there are other bottlenecks while processing that "small" value?

jjjjL
03-15-2003, 06:46 PM
the rate limit is by user. the more people that are running secret, the more noticeable it will be. i just noticed it got a lot worse here... expect it to not matter in a few hours i'd guess.

run a sieve in case secret gets banned/limited in that time.

-Louie

Mystwalker
03-15-2003, 06:59 PM
Just saw that the user stats for secret don't seem to be filled out completely...
As no date has been inserted, the script assumes a start date of 1-1-1970. :)
The profile doesn't look good either:

System error
error: no such user at /usr/local/apache/htdocs/sb/data/getProfile.mc line 28.

context:
...
24: </%args>
25:
26: <%perl>
27:
28: my $profile = ($m->comp('../db/select.mc',
29:     tables => [ 'profiles p' ],
30:     fields => [ 'p.userID', 'p.name', 'p.location',
31:         'p.email', 'p.gender', 'p.age', 'p.comment' ],
32:     where => [ 'p.userID = ?' ], params => [ $userID ]))[0]
...

code stack:
/usr/local/apache/htdocs/sb/data/getProfile.mc:28
/usr/local/apache/htdocs/sb/profiles/index.mhtml:26
/usr/local/apache/htdocs/sb/autohandler:44
/usr/local/apache/htdocs/sb/syshandler:18

One could think he's a mere phantom! :D


Update: Just reached n=40K :|party|:

Nuri
03-15-2003, 09:29 PM
Finally completed 450-500. Currently submitting the results. I'll post the dat file in the sieve coordination thread.

I'm joining the secret testing as well. Good luck to every secret user. :cheers:

BTW, at which point would you consider it a good time to update the SoB.dat? My vote is for n=300,000. This is soon enough, and the lowest n lower bound was 300,127 (for k=24737). We might get another update when the double checking of the remaining k's finishes.

What do you think?

Moo_the_cow
03-15-2003, 10:08 PM
I vote for n=400,000, mainly because the n lower bound for k=67607 (which has the lowest Proth weight) is 400,000.

On the other hand, I vote for the SoB.dat file for the other sieving project to be updated at n=5M.

Out of curiosity, how many "secret" tests have you done? I've done about 50, and I'm wondering if anyone has done 1000.

Nuri
03-15-2003, 10:14 PM
Moo, can you still get new tests from the server?

It stopped giving new tests to my client after the third test. I haven't been able to get one for the last 30 minutes.

Moo_the_cow
03-15-2003, 10:19 PM
It appears so.

I just got:

got proth test from server - k=24737, n=50983

If the server stops giving new tests to your client after the 3rd test, I recommend that you close it and restart it.

smh
03-16-2003, 04:04 AM
Out of curiosity, how many "secret" tests have you done?
I've done about 50, and I'm wondering if anyone has done
1000

I've done about 400, but most of them were with very low N (<20K).


It stopped giving new test to my client after the third test. Can not get for the last 30 minutes.

I've had that problem a couple of times too; after a few dozen tests the client appears to hang. Stop/start doesn't help. I have to exit the client and start it again.

Nuri
03-16-2003, 11:27 AM
The client was somehow stuck at 5359*2^48150+1 and couldn't report it to the server.

Thx for the idea smh and Moo_the_cow. Unfortunately, that didn't help either. But it made me think of another solution. I used the trick of changing the user name to some dummy name (which cleared the cache), then changed it back to secret and restarted the client afterwards. It works just fine now.

BTW, the pending test distribution graphs for six of the k values (10223, 19249, 27653, 28433, 33661, and 67607) do not show the DC tests.

This is strange, because the server seems to distribute all k values except 55459 (which is normal, because its lowest available n is still larger than our tests - 183214 vs. ~105000).

Anyway, this is not something crucial. Just wondered why.

Happy double checking.

smh
03-16-2003, 12:18 PM
Another thing I was wondering about: will this 'secret' account only test the numbers which were done outside of SoB, or will it start double checking all N's?

Mystwalker
03-16-2003, 01:47 PM
I've done 400-500 tests yesterday, I think.
Sometimes the connection to the server was very slow, but that changed when we hit ~35K...

Already at 110K - you guys did a nice job over night. :cheers:

ceselb
03-17-2003, 10:46 AM
I've done over 800 tests so far. Currently at 155k.

MikeH
03-18-2003, 01:38 PM
Nuri said
I'll rejoin DC sieving as soon as an updated SoB.dat (which excludes the lower n values) is available.

What do you think the low end of the sob.dat file should be?

Since the current 'secret' check can be considered a double check of the low n values, it would seem logical to adjust the sob.dat such that the low value becomes the smallest original 'n lower bound'. Does this seem reasonable?

This being the case, the lowest n bounds were (I think)

k=24737, ~300000
k=27653, ~340000
k=67607, ~400000
k=10223, ~610000

If we therefore raised the floor of the sieve to 300K, then all remaining composites with a single test will be covered. It may, however, be better to pick a higher value (say 600K), so that the sieving is more efficient. On the downside, ~2000 remaining composites will be outside the sieve, but since these are all small n values, a PRP double check would be quite quick. On the upside, we will see a ~12% increase in speed.

Whatever we chose to do, I can generate a new sob.dat file with little effort.

All comments welcome.

Joe O
03-18-2003, 02:45 PM
Since the current 'secret' check can be considered a double check of the low n values,

Actually, this is the first time that SoB is checking these values! I think that we should continue with the current low end values at least until "secret" has checked all the values up to the point where SoB previously started. It's more efficient to eliminate values by sieving than by PRPing, so we will be helping out "secret" by continuing to check these low values.

By the way, we're still sieving faster than the 3M < n < 20M folks.

Joe O
03-18-2003, 03:00 PM
Originally posted by MikeH
...and the factors.

I'm thinking that beyond p=1T there is little value in posting the factors to this forum, since they can be resolved from the daily updated results.txt?

MikeH, I agree, at least as long as Louie doesn't change the lower bound for the daily results.txt file. In fact, even if he did, but updated the lowresults.zip file weekly we would be OK.

MikeH
03-18-2003, 03:28 PM
Joe O said
Actually, this is the first time that SoB is checking these values!
Correct, but these ranges were checked by previous searchers; that's why it should be considered a 'double check', even if not in the truest sense.

As for where the low end of the sob.dat file should be - secret is now at 190000. Not sure exactly how quickly this is moving, but if the current secreters keep going, we might be at 300K by the weekend. So how about we lift to 300K when secret hits 300K, then discuss what we do after that?

Joe O
03-18-2003, 03:48 PM
That would give us a 5% speedup. Is it worth it?

If, for some reason, we had to come back and do 300 < n < 300000 it would add 32% to the effort.

Mystwalker
03-18-2003, 04:41 PM
When the other searchers "offer" their residues, I think it can indeed be considered a double check...

Nuri
03-18-2003, 05:06 PM
The problem is, the programs that tested these values previously did not produce residues.

Here (http://www.free-dc.org/forum/showthread.php?s=&threadid=2714) is a comment by Louie in Proth tests completed / "secret" double-check thread.


anyway, technically, the lower ranges of 12 remaining k values have never been checked by SB. We took it on faith that the previous searchers organized by Keller did them right. Most of them used proth.exe or old versions of prp that didn't produce residues... so there is really no way to verify their work without doing it again. However, these tests take very little time compared to current tests. It's kinda fun to watch my machine do a test a minute.

But still, it's no big deal. These numbers are relatively very small, and could be checked again if any need arises in the future.

smh
03-18-2003, 05:25 PM
Originally posted by Joe O
If, for some reason, we had to come back and do 300 < n < 300000 it would add 32% to the effort.

Not really, I think sieving for N < 300,000 is sufficient, at least for the biggest part of it. I don't know how many factors are found in a given time compared to the number of tests that can be done in the same time, but the smallest tests take so little time that it's really not worth spending a lot of time on them. Double checks shouldn't even be done with the SoB client, unless it is programmed for it (shift some bytes so you use different values throughout the calculation and shift them back in the end to get a matching residue).

Joe O
03-18-2003, 05:55 PM
The sieving effort is roughly proportional to the square root of the n range.
SQRT(3000000-342) ~ 1732
SQRT(3000000-300000) ~ 1643 ~ 95% of the previous line, or a 5% savings.

SQRT(300000) ~ 548 ~ 32%
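
For anyone who wants to reproduce these figures, here is a minimal Python sketch of the square-root cost model Joe O states above (the model is the stated assumption, not the sieve client's actual cost function):

from math import sqrt

def relative_effort(n_min, n_max):
    """Sieving effort, assuming cost ~ sqrt of the n-range width."""
    return sqrt(n_max - n_min)

full = relative_effort(342, 3_000_000)        # current 1<n<3M dat
raised = relative_effort(300_000, 3_000_000)  # proposed 300K floor
redo = relative_effort(342, 300_000)          # cost of coming back for n<300K

print(f"savings from a 300K floor: {1 - raised / full:.1%}")  # ~5%
print(f"cost of redoing n<300K:    {redo / full:.1%}")        # ~32%
# The same model gives ~11% for MikeH's 600K suggestion and ~2% for
# Moo_the_cow's earlier 100K suggestion.
print(f"savings from a 600K floor: {1 - relative_effort(600_000, 3_000_000) / full:.1%}")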

SMH, I agree with you that "sieving for N < 300000 is sufficient." That said, lets just do it now, once and for all.

MikeH, I agree with you that duplicate factors add nothing to the project. So if you produce an SOB.DAT file for 342 < N < 2999967 with those N removed that have had factors found since the last one was produced, I for one will use it.

I've been trying to factor the following low N so that we can shorten the range:

33661*2^432+1 P-1, P+1, and 336 ECM curves done on this one so far using GMP-ECM 5.0.
10223*2^509+1
67607*2^531+1
28433*2^553+1
5359*2^622+1
5359*2^886+1
24737*2^991+1
When those are done, I've got a list of expressions with N < 3000, in order by N, for which we haven't found factors yet. Does anyone want to pitch in? I've attached the list, one per line. Yes, this is more effort than sieving, but if we attack them in order, we can shorten the sieving interval and gain some of the effort back.

Nuri
03-18-2003, 06:19 PM
Originally posted by MikeH
If we therefore raised the floor of the sieve to 300K, then all remaining composites with a single test will be covered.
This is exactly why I chose to propose 300,000 a couple of days ago. While I still think it might be 'a little early' to increase the lower bound to a value as high as 600K, I see no reason not to push it up to 300,000.

Tests below 300K take a very short time. We're just a handful of people using secret, but even that will be enough to finish all the way up to 300K within a week.

It will take even less time if we, for whatever reason, want to check them once more in the future (faster clients, faster boxes, and hopefully fewer ks). So, why worry about it?

And, don't forget, we've already sieved n<300K up to 1T. That means, we've already eliminated most of the candidates.


Originally posted by Joe O
I'ts more efficient to eliminate values be sieving than by PRPing, so we will be helping out "secret" by continuing to check these low values.

By the way, we're still sieving faster than the 3M < n < 20M folks.
Well, in fact prping might be even more efficient for those low values.

Here's a little calculation;

Mike found ~570 factors for 800G-900G, that means ~5.7 factors per G. I've not counted the exact number of factors below 300K, but assuming they are evenly distributed, and using the speed of my PC at DC sieving ranges (125,000p/sec) => 5.7 * (300 / 3,000) * 125,000 * 60 * 60 * 24 / 1G = 6.16 factors below 300K per day. (And don't forget, this figure will decrease as we go up in p values).

I'm taking the speed of a prp test @ n=150,000 as the average speed of prp tests below 300K. My PC finishes a 150,000 test in 6 minutes. => 1 test / 6 min * 60 * 24 = 240 tests below 300K per day

So, with the current settings that we have (SoB.dat up to 3M, and p at 1T), we can say that a PRP test is roughly 39 times more efficient than sieving for n<300K.

PS: Please feel free to correct me if I made any mistakes in assumptions and/or calculations above.
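
Nuri's arithmetic, restated as a quick Python check (all rates are his measured figures from above, tied to his P4-1700, not universal constants):

factors_per_G   = 5.7           # factors per 1e9 p, from Mike's 800G-900G range
frac_below_300K = 300 / 3000    # share of the 1<n<3M dat below n=300K
sieve_p_per_sec = 125_000       # DC sieve throughput
prp_secs        = 6 * 60        # one PRP test at n~150,000

factors_per_day = factors_per_G * frac_below_300K * sieve_p_per_sec * 86_400 / 1e9
tests_per_day   = 86_400 / prp_secs

print(f"{factors_per_day:.2f} factors/day below 300K by sieving")  # ~6.16
print(f"{tests_per_day:.0f} PRP tests/day below 300K")             # 240
print(f"PRP is ~{tests_per_day / factors_per_day:.0f}x more efficient here")  # ~39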

Sieving becomes more and more valuable and it makes more sense to sieve deeper as n (therefore the time to prp test a candidate) increases.

Yes, we're sieving roughly 2.4 times faster than the 3M < n < 20M folks, but this does not necessarily mean we are more effective. A factor there eliminates a prp test @ n=11,500,000 on the average, whereas a factor here eliminates a prp test @ n=1,500,000 on the average.


Originally posted by Joe O
That would give us a 5% speedup. Is it worth it?
Well, think of it the other way around Joe. Why should we slow down our sieving effort by 5% going forward if it takes just a week to test every single remaining candidate below 300K?




Originally posted by MikeH
I'm thinking that beyond p=1T there is little value in posting the factors to this forum, since they can be resolved from the daily updated results.txt?
BTW, I also agree with that. We really do not need to post factors above 1T to the forum.

Nuri
03-18-2003, 06:25 PM
Sorry, Joe, I didn't notice your post while writing mine.

So, it seems like we all agree on the changeover. Happy sieving everyone.

Joe O
03-18-2003, 06:42 PM
And, don't forget, we've already sieved n<300K up to 1T.
Not quite, I haven't finished my range yet. I'm at
pmin=913924489217


Edit: I'd be happier if we got to 3T before truncating the range.

Nuri
03-18-2003, 06:52 PM
Oh, I see, sorry about that.

Then, it would be very kind of you to continue using our current SoB.dat file for that range. This way, we will know for future reference that 100<n<300,000 is sieved up to 1T.


EDIT: BTW, although I think we've already sieved n<300K enough, doing the changeover at 1T or 3T is no big deal for me. Agreeing that we should do the changeover at some point is more important. Deciding exactly when to do it (and up to which n value) is a secondary issue.

Joe O
03-18-2003, 07:19 PM
"Agreeing on that we should do that changeover sometime is more important. "

I agree. Let's do the changeover. I'll finish my range with the current SoB.dat file. There is one other range in progress (1000-1100, Moo_the_cow) that I think should also continue with the current file. Do you think that we could get to 1500 (1.5T) before creating the new file? We should be able to do that before the end of the month (that's only 10 days away). Mike, would it be easier for you to create the file on a weekend? If so, how about the 29th/30th, no matter how far we've gotten?

Moo_the_cow
03-18-2003, 07:37 PM
quote:
____________________________________________
Do you think that we could get to 1500 (1.5T) before creating the new file? We should be able to do that before the end of the month
______________________________________________

I think that we can get to 1.5T, but only if I run this 2x checking sieving project 24/7 (or it may take as long as 18 days to finish my range :eek: ). Currently, I'm running the 2x sieve only in the daytime, and running the other sieve overnight. However, if I do only this sieve 24/7, I'll be delaying the other project, and some guys are going to be pretty mad about it. I'm reluctant to change my reserved range, since I feel that a lower range is more productive.

Of course, all suggestions are welcome.

Joe O
03-18-2003, 07:50 PM
"However, if I do only this sieve 24/7, I'll be delaying the other project, and some guys are going to be pretty mad about it. I'm reluctant to change my reserved range, since I feel that a lower range is more productive."

Well I certainly don't want anyone to get mad. And there is absolutely no need for you to change your reserved range. Just keep on going the way you are and continue to use the SOB.DAT file that you are currently using until you finish your range. That way we will know that all the n < 3000 will have been sieved to 1.1T.

Now if we could get some "volunteers" to agree to do the range 1100 to 1300 with the current SOB.DAT file, I will volunteer to do 1300 to 1500 with the current SOB.DAT file. Then we will have all the n < 3000 sieved to 1.5T!

smh
03-19-2003, 03:40 AM
Joe O:
I've been trying to factor the following low N so that we can shorten the range:

33661*2^432+1 P-1, P+1, and 336 ECM curves done on this one so far using GMP-ECM 5.0.
10223*2^509+1
67607*2^531+1
28433*2^553+1
5359*2^622+1
5359*2^886+1
24737*2^991+1


What B1 did you use for ECM (and P-1/P+1)? I remember I have done about 200 curves at B1=250K on all of the above, so you should be running B1=1M curves now.

The first of the numbers above can be done with UBASIC's NFSX. A wild guess, but I think a fast Athlon (or P4) can do this number in a day. If someone wants to give it a shot and needs help setting up the program (it's a bit more complicated than creating an 'in' file), just ask.


Joe O:
When those are done, I've got a list of expressions with N < 3000, in order by N, for which we haven't found factors yet. Does anyone want to pitch in? I've attached the list, one per line.


I will, but you don't seem to have them attached. I created one by hand for N's between 1000 and 3000. Did I miss any?

edit: I sorted the file by N myself, ran a bit of P-1 on it, and have already found factors for 12 of the 66 numbers. I'll submit them in a minute.

Joe O
03-19-2003, 08:30 AM
I started at 1M. I've run the P-1 up to 85E7, the P+1 up to 11E7, and the ECM up to 3M.
I'm sorry, I meant to attach the file, but I must not have clicked on the attach button. I've selected it in the browser again; we'll see if it works this time.
Yes, I would appreciate some help setting it up. First of all, where can I get UBASIC for Windows?

Edit: The following were on my list and not on yours:

55459*2^1018+1
55459*2^1030+1
55459*2^1054+1
55459*2^1306+1
55459*2^1498+1
55459*2^1666+1
55459*2^1894+1
55459*2^1966+1
55459*2^2134+1
55459*2^2290+1
55459*2^2674+1
55459*2^2686+1

Could you let me know which 12 N you've eliminated?

And last but not least, Congratulations on eliminating those 12 N!

smh
03-19-2003, 10:31 AM
I started at 1M. I've run the P-1 up to 85E7, the P+1 up to 11E7, and the ECM up to 3M.

You need to run 1100 curves with B1=1M to find most 35-digit factors.
See the ECMNET (http://www.loria.fr/~zimmerma/records/ecmnet.html) page for more information.


Edit: The following were on my list and not on yours:

Seems that I forgot to include the K=55459 numbers. I've added them to my list (and found a factor for one of them already).


Could you let me know which 12 N you've eliminated?

A few more than 12 ;) I'll attach them (not all are submitted yet - I can't get to that page - but I'll do that tonight).


Yes, I would appreciate some help setting it up. First of all where can I get UBASIC for Windows?

I can't find it online at the moment, but I can send you a slightly changed version of the program (the only changes are to produce an output file with the found factors) tonight (NL time).

Is it okay to send a couple of hundred Kb to your e-mail address?

Joe O
03-19-2003, 10:55 AM
Is it okay to send a couple of hundred Kb to your e-mail address?
Yes, it is. Please ZIP it if you can. Or ARC it, or ....
I don't know what my ISP's limit is, but we'll find out. Perhaps you could break it up into convenient "chunks".

As far as the iterations go, I'm using the following table:

The following table gives a set of near-to-optimal B1 and B2 pairs, with the corresponding expected number of curves to find a factor of given size (this table does not take into account the "extra factors" found by Brent-Suyama's extension, see below).

digits D   optimal B1   B2        expected curves N(B1,B2,D)
15         2e3          1.2e5     30
20         11e3         1.4e6     90
25         5e4          1.2e7     240
30         25e4         1.1e8     500
35         1e6          8.4e8     1100
40         3e6          4.0e9     2900
45         11e6         2.6e10    5500
50         43e6         1.8e11    9000
55         11e7         6.8e11    22000
60         26e7         2.3e12    52000
65         85e7         1.3e13    83000
70         29e8         7.2e13    120000

Table 1: optimal B1 and expected number of curves to find a factor of D digits.

Important note: the expected number of curves is significantly smaller than the "classical" one we get with B2=100*B1. This is due to the fact that this new version of gmp-ecm uses a default B2 which is much larger than 100*B1 (for large B1), thanks to the improvements in step 2.
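
For convenience, here is the same table transcribed into a small Python helper (the values are copied from Table 1 above; the helper itself is my addition, not part of GMP-ECM):

# (digits, B1, B2, expected_curves) rows from Table 1 above
ECM_TABLE = [
    (15, 2e3, 1.2e5, 30), (20, 11e3, 1.4e6, 90), (25, 5e4, 1.2e7, 240),
    (30, 25e4, 1.1e8, 500), (35, 1e6, 8.4e8, 1100), (40, 3e6, 4.0e9, 2900),
    (45, 11e6, 2.6e10, 5500), (50, 43e6, 1.8e11, 9000), (55, 11e7, 6.8e11, 22000),
    (60, 26e7, 2.3e12, 52000), (65, 85e7, 1.3e13, 83000), (70, 29e8, 7.2e13, 120000),
]

def curves_remaining(digits, curves_done):
    """Curves still expected at the optimal B1 for a factor of `digits` digits."""
    for d, b1, _b2, expected in ECM_TABLE:
        if d == digits:
            return b1, max(0, expected - curves_done)
    raise ValueError("no table row for that digit size")

# e.g. Joe O's 336 curves toward a 35-digit factor:
b1, left = curves_remaining(35, 336)
print(f"B1={b1:g}: ~{left:.0f} curves to go")  # ~764 of the expected 1100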

smh
03-19-2003, 02:33 PM
Joe, the only reason I asked is because you first wrote you had run 336 curves, and later you said you were at 3M.

I sent a zip to the e-mail account I used the other day. It's 487Kb, so that shouldn't be a real problem.

I hope I made it clear how to use the program; otherwise just ask.

Let us know the results.

Nuri
03-19-2003, 02:39 PM
Originally posted by Joe O
"However, if I do only this sieve 24/7, I'll be delaying the other project, and some guys are going to be pretty mad about it. I'm reluctant to change my reserved range, since I feel that a lower range is more productive."

Well I certainly don't want anyone to get mad. And there is absolutely no need for you to change your reserved range. Just keep on going the way you are and continue to use the SOB.DAT file that you are currently using until you finish your range. That way we will know that all the n < 3000 will have been sieved to 1.1T.

Now if we could get some "volunteers" to agree to do the range 1100 to 1300 with the current SOB.DAT file, I will volunteer to do 1300 to 1500 with the current SOB.DAT file. Then we will have all the n < 3000 sieved to 1.5T!

Ok. It's a deal. :D

Let's make the changeover at 1.5T.

Moo_the_cow (10xx-1100), Halon50 (1100-1300) and Joe O (1300-1500) will use the current SoB.dat file for their ranges, and all others will use the updated one.

Happy sieving all, and welcome back Halon50.

EDIT: BTW, I'm reserving 1500-1600. Will start when the updated file is ready.

MikeH
03-19-2003, 03:08 PM
and I'll reserve

1600 - 1700 MikeH

I'll have an updated sob.dat ready within the hour.

EDIT - the updated sob.dat (http://www.aooq73.dsl.pipex.com/SobDat_n300K-3M.zip) is ready.

All the factors posted on the forum since the last update, as well as those in today's results.txt, have been removed. The nmin has then been lifted to 300K.

The resulting file actually has nmin=300020, and nmax=2999967, since it is now optimised to the lowest and highest remaining n values.

I've also included a file sob.txt which has the factors 'clear'. This is just for information.

Joe O
03-19-2003, 06:11 PM
Originally posted by smh
Joe, the only reason I asked is because you first wrote you had run 336 curves, and later you said you were at 3M.

I sent a zip to the e-mail account i used the other day. It's 487Kb, so that shouldn't be a real problem.

I hope i made it clear how to use the program, otherwise just ask.

Let us know the results

I've received your e-mail and unzipped the attachment. Your instructions are very clear. I will try them tomorrow, as it has been a very long day.

Halon50
03-20-2003, 11:01 AM
Thanks for the welcome back Nuri!

I have a question for when the current range (1100-1300G) is finished. Do you still need the SoBStatus.dat files for this range in addition to normal submission to the database?

Joe O
03-20-2003, 11:20 AM
Halon50,
Normal submission of the results should be enough. You can post them here as well if you want. It would make my life easier, but would only be necessary if Louie changes the range of the daily results.txt file. At the moment his cutoff is 1T, so your results will show there. Moo_the_cow and I, on the other hand, need to post our results here, since we are below the 1T mark.

Halon50
03-21-2003, 09:10 PM
Ok, thanks for the info; I'll compile and zip the results file tomorrow when I'm a little more awake and post it in the other thread!

Joe O
03-24-2003, 07:39 AM
I did reply to your post in the other thread. Did you see it before it was deleted? Anyway, there were 6 factors found in the range 9440 - 9450 for 1 < n < 3M. So far, I have found 10 factors for 3M < n < 20M. Currently at 9446900000000 and counting. What is the formula for the expected number of factors to be found?


Update: Now 12 factors, 9447160000000 and counting.

Update: Total 20 factors.

Nuri
03-24-2003, 12:47 PM
Thanks for the response Joe.

Your data for both of the sieves seem within the normal limits I would expect (although the figure for the DC sieve is slightly higher, and the one for the normal sieve slightly lower, than my expected averages).

I was just wondering if the density of factors at double check sieve is somewhere close to 3/17 of the density of factors at normal sieve. I haven't worked on it yet, but my logic suggests that, since the candidates at DC are smaller, it should be slightly more than 3/17.

I also wanted to get some clues to understand how deep a double check sieve would be reasonable (and what p & nmin values for the following changeovers ;)).

Anyway, thx again for the feedback.

Halon50
04-05-2003, 03:22 PM
It's been two weeks now... any updates on when the range through 1.5T will be finished?

Nuri
04-05-2003, 05:33 PM
I guess it's a bit early. Yes, two weeks have passed, but we have only come to about 1.7T yet.

Here's a quick and dirty analysis that I made last week, to have a better guesstimate of the second changeover timing.

I don't have data for every entry in the table, so I made some assumptions, and checked against actual data where I have it. Sieve and prp speeds are based on a P4-1700, so results might vary significantly based on hardware.

Assumptions in the table (see the sketch after this list):
- DC sieve speed increases 1.5% as p doubles.
- Time to PRP test a candidate increases with the square of n.
- The number of factors per given range decreases in inverse proportion to p.
- Factors are distributed evenly between nmin and nmax.
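
To make these assumptions concrete, here is an illustrative Python sketch of the same model; the starting rates are example values taken from earlier posts in this thread, not the actual inputs of Nuri's table (which was an attachment):

from math import log2

def sieve_speed(p, base_speed=125_000, base_p=1e12):
    """Assumption 1: sieve speed grows 1.5% each time p doubles."""
    return base_speed * 1.015 ** log2(p / base_p)

def prp_time(n, base_secs=360, base_n=150_000):
    """Assumption 2: PRP time scales with the square of n."""
    return base_secs * (n / base_n) ** 2

def factors_per_G(p, base_rate=5.7, base_p=1e12):
    """Assumption 3: factor yield per G falls off as 1/p."""
    return base_rate * base_p / p

# Assumption 4 (factors spread evenly over n) means a factor saves, on
# average, one PRP test at the midpoint of the n range. Sieving a G of p
# stops paying once the PRP time it saves drops below the sieving time:
p = 2e12
n_mid = (300_000 + 3_000_000) / 2
hours_per_G = 1e9 / sieve_speed(p) / 3600
hours_saved = factors_per_G(p) * prp_time(n_mid) / 3600
print(f"at p={p:.0e}: {hours_per_G:.1f} h to sieve a G vs ~{hours_saved:.0f} h of PRP saved")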

I guess deciding at which p value (and up to what nmin) to make the changeover depends on how conservative a stand we want to take. Of course, the most conservative stand would be not to make any changeovers at all (which is still fine, as long as we're aware of the consequences of it and its alternatives).

Please feel free to comment on and/or correct the calculations and assumptions of the table.

Regards,

Nuri

Halon50
04-05-2003, 09:02 PM
Errr, sorry, what I meant was: when will the new sieve file that has the factors through 1.5T removed be up? I've been waiting for it before starting my machines on sieving again.

(Of course, if your graph answers exactly that, then I apologize!)

Nuri
04-06-2003, 03:44 AM
I see what you mean now, no need for apologies. As you know, there are two types of updates possible.

One is the kind that removes the found factors, but as far as I know, that does not affect the sieve speed. This was discussed and explained by Paul and Phil in the sieve client thread. As far as I understand, the effect of such an update is that you start to find fewer duplicates, but the number of unique factors does not change. I guess this is why the SoB.dat file is not updated frequently in this sense.

The other one is the one I mentioned above. At a certain point, sieving some ranges becomes meaningless, as PRP testing the candidates in that range becomes much faster per candidate. The benefit of such an update is that, since the range becomes smaller after the update, the sieving speed increases, and this enables us to find more factors among the larger n candidates, as we can sieve deeper per given time and computing effort. My post was related to that: to have an idea at what point (and up to which nmin) it might be meaningful to have an update.

Anyway, Halon50, I'd be happy to hear that you continue sieving. Since we have a new version of the sieve client (http://www.free-dc.org/forum/attachment.php?s=&postid=25131) now, you will get a 15% speed increase anyway.

MikeH
04-06-2003, 05:12 AM
Halon,

Are you looking for the revised sob.dat file where the bottom end of the n range was raised to 300K? If so, then it can be found in this post (http://www.free-dc.org/forum/showthread.php?s=&postid=24353#post24353).

The agreement was that 100<n<3M sob.dat would be used to sieve to p=1.5T, then 300K<n<3M sob.dat would be used from there onwards.

If you are looking for a newer version of the 300K sob.dat file, sorry there isn't one. We'll probably be running with this one for a while yet.

Halon50
04-07-2003, 12:47 AM
Aha, I got it; thanks for the explanations! I was waiting all this time for nothing it seems... :p

I'll pick up the 300k-3M file tomorrow and start my machines back up soon. Thanks again!

Nuri
04-07-2003, 01:22 PM
Happy sieving Halon.

MikeH
04-12-2003, 01:28 PM
I have generated an alternative sob.dat file (http://www.aooq73.dsl.pipex.com/SobDat%20n300K-20M.zip). This file covers the range 300K<n<20M, and therefore allows simultaneous main and DC sieving.

I accept Nuri's comment that this introduces another aspect of confusion, but for anyone that is interested, the file is now available.

When using this file, please reserve your range from the 3M<n<20M co-ordination thread, then post the same range on the 1<n<3M co-ordination thread. Obviously the same when you complete.

Again, do not feel in any way obliged to use this alternative sob.dat file; if you are happy with how you currently work, keep doing just that, don't change.

Mike.

Nuri
04-12-2003, 02:51 PM
Thanks for the alternative file Mike. I'm switching to alternative SoB.dat wherever it makes sense.

MikeH
04-16-2003, 03:19 PM
Having seen today in the sieving stats that I have four excluded factors, I have been investigating what has gone wrong. I was really concerned that something was very wrong with the alternative sob.dat file I created at the weekend.

After some investigation, my worries were over. The sob.dat file I used as my base was one that was not fully sieved to 1G. As a result, there were candidates in the alternative sob.dat file that would have fallen out before p=1G, thus the excluded factors. :bang:

This means there are no problems with the results that will have been generated with this sob.dat file - no factors have been missed. However, I have now generated a new alternative sob.dat file (http://www.aooq73.dsl.pipex.com/SobDat%20n300K-20M.zip) (the old one is overwritten), which is one based on a 1G sieved sob.dat.

For information, the number of candidates has been reduced from 785315 to 719699 between the duff and good sob.dat files.

Sorry for any confusion this may have caused anyone else.

Mike.

jjjjL
04-17-2003, 01:53 PM
it should also be noted that i did a little sieving between 2.7 - 3 million before we reached the 3M barrier to eliminate more tests. that might also cause small factors to be duplicates. i don't really remember which ranges i factored though.

-Louie

Nuri
04-17-2003, 04:39 PM
Originally posted by jjjjL
it should also be noted that i did a little sieving between 2.7 - 3 million before we reached the 3M barrier to eliminate more tests. that might also cause small factors to be duplicates. i don't really remember which ranges i factored though.

-Louie

I did a quick analysis of the results.txt file and in case anyone is interested, here's the data of factors submitted by Louie for n<3m.

olaright
04-23-2003, 06:42 AM
This double sieving is really hard to understand for outsiders visiting the project only now and then. Could someone please give a short explanation of why double sieving is necessary? From the numbers it seems that all those ranges have already been sieved. Is it worth the effort? And how come so many new factors are found?

If you explain, please begin from the beginning or else I will not understand :) And I think I am not the only one having problems with this.

Many Thanks,

Ola

Nuri
04-23-2003, 09:09 AM
Hi Ola,

To say it first, the DC sieve is not double checking the sieve; it sieves for the double check. But I guess this sounds strange, so I'll try to start from the beginning.

I'm sure you know most of the things I'll write below, but I hope they will help with the explanation of DC sieve.

As you know, the Seventeen or Bust project is testing numbers of the form k * 2^n + 1 to find primes for each of the k values. The project started with 17 k values, and as primes for 5 k values were found in 2002, we are left with 12 k values now.

There are mainly two different procedures used for the project, namely sieving and PRP testing.

Sieving, as its name implies, looks at the remaining candidates and eliminates the ones that are divisible by a smaller prime number. The candidates that are divisible by a smaller prime obviously cannot be prime themselves, so they are eliminated from the set of candidates that should be PRP tested.

On the other hand, PRP testing takes each candidate one by one, and tests if it is a prime or not.

Sieving cannot find primes, but it is very useful, because it can eliminate candidates faster than a PRP test can (still much faster at the p values we are sieving right now). So it helps the project proceed faster, by eliminating candidates before they are PRP tested. In a way, sieving clears the way for PRP testing.
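
In code terms, the core test a sieve performs is just a modular exponentiation; here is a toy Python illustration (the real sieve clients use far cleverer baby-step/giant-step methods to scan whole n ranges per prime at once):

def divides(p, k, n):
    """Does the prime p divide the candidate k * 2^n + 1?"""
    return (k * pow(2, n, p) + 1) % p == 0

# e.g. a factor reported later in this thread; should print True,
# meaning 33661*2^564648+1 is composite and needs no PRP test:
print(divides(54458662105433, 33661, 564648))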

Now, the Sieve and DC Sieve part:

Our main sieving sub-project started a couple of weeks before the main project started PRP testing values where n in the k * 2^n + 1 formula exceeds 3 million. The candidates the main sieving tries to eliminate start at k * 2^3,000,000 + 1 and end at k * 2^20,000,000 + 1, for each of the 12 remaining ks.

On the other hand, DC sieve aims to decrease the number of candidates from k * 2^1 + 1 up to k * 2^3,000,000 + 1, in case we need to double check them by PRP testing in the future.

Here comes the question: were they not sieved in the first place, when the main project was testing n values smaller than 3,000,000?

Yes, they were sieved, but not very deep. The trick is here: just a couple of months ago, the sieve client was much more primitive than the one we are using right now. Thanks to the admirable efforts of Paul and Phil (and lately Mikael as well), it's far better now.

To compare: when our main project was PRP testing numbers with n smaller than 3,000,000, the sieve client could only test one k value at a time, whereas it can now test all 12 k values at the same time. Also, it was much, much slower (more than 30 times) than our current client.

Therefore, despite Louie's computing efforts, they could not be sieved deep enough. With the new client, the DC sieve is now trying to reduce the number of candidates for n smaller than 3,000,000, in case we need to double check their PRP testing in the future.

Then the question comes to mind: will we ever need to double check those numbers? I'm not sure when we should start that, but I'm sure it will be worth the effort when the time comes.

So, I hope this explanation helped you.

Please feel free to ask if you have further questions.

And others, please feel free to add your comments / corrections.

Regards,

Nuri

Mystwalker
04-23-2003, 09:09 AM
Well...
It's no second sieve run, it's sieving for the double check run.
So the range was only searched by Louie so far. And sieving speed increased somewhat in the last month - I guess it's roughly 30 times faster now.
So it's possible to search a much greater range now. :)

olaright
04-23-2003, 11:09 AM
Thanks a lot Mystwalker and especially Nuri!!

:cheers:

Finally I understand what you are doing with "double sieving", and it is indeed useful (although sieving itself is much more important).
I have a slow PIII 450MHz, but a permanent connection to the internet, so I do PRP testing. Currently it takes me about 14 days for 1 test. That's why sieving is so important (each potentially sievable candidate is 14 days wasted for me)! In the long run, we will find a prime much faster if we only PRP test deeply sieved numbers. I wanted to state that once more, so big thanks to the sievers.

And thanks again for the explanation!!!!

Ola

Nuri
05-04-2003, 09:58 PM
Here's the first graph for distribution of factors vs. p value at DC sieve.

As expected, the number of factors per G is 3/17 of the main sieve up to 1.5T, and 2.7/17 of it thereafter.

If the formula is right, from the ranges where we stand, it suggests the following (a sketch of the model follows the list):

A total of 1,600 new factors up to 5T,
A total of 3,800 new factors up to 10T (2,200 of which is for 5T-10T),
A total of 5,950 new factors up to 20T (2,150 of which is for 10T-20T),
A total of 9,050 new factors up to 50T (3,100 of which is for 20T-50T), and
A total of 13,400 new factors up to 200T (4,350 of which is for 50T-200T).
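
A minimal sketch of that projection model, using the earlier assumption that factor yield per G falls off as 1/p, so the expected count between p1 and p2 grows with ln(p2/p1); the constant is calibrated from the 5T-10T figure above:

from math import log

c = 2200 / log(10 / 5)  # calibrate: ~2,200 factors projected for 5T-10T

for p1, p2 in [(5, 10), (10, 20), (20, 50), (50, 200)]:
    print(f"{p1}T-{p2}T: ~{c * log(p2 / p1):,.0f} factors")
# 10T-20T comes out at ~2,200 (vs. 2,150 above), 20T-50T at ~2,900
# (vs. 3,100), and 50T-200T at ~4,400 (vs. 4,350) - a reasonable fit.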

Going back to the graph:

The red and blue lines do not fit because the data includes the sieving effort by Louie (mainly for the range 2.7M<n<3.0M) when we were approaching n=3M on PRP testing. If we ignored these factors, the two lines would fit perfectly, but I just wanted to show where those factors stand relative to our current effort.

Next update for DC sieve graph will come when we finish everything below 5T.

Regards,

Nuri

MikeH
05-10-2003, 10:02 AM
In order to try to encourage more sievers to move to the 300K-20M sieve effort, and because the whole area of DC sieving is becoming blurred (since 0.6M of the 3M-20M area is also now DC), I am implementing a good suggestion by JoeO to normalise the scores. As a result, the sieve scoring (http://www.aooq73.dsl.pipex.com/scores.txt) will change from

n < 3M, score = p/1T * 0.5 * ((n*n)/(3M * 3M))
3M < n < 20M, score = p/1T
n > 20M, score = p/1T * 0.05
duplicates score = score * 0.01

to

n < 300K, score = p/1T * 0.5 * ((n*n)/(300K * 300K))
300K < n < 20M, score = p/1T
n > 20M, score = p/1T * 0.05
duplicates score = score * 0.01

As a result, anyone who has performed any sieving in the range 100<n<3M will see their score increase (a little) tomorrow. :mouserun:

MikeH
05-11-2003, 09:54 AM
to

n < 300K, score = p/1T * 0.5 * ((n*n)/(300K * 300K))
300K < n < 20M, score = p/1T
n > 20M, score = p/1T * 0.05
duplicates score = score * 0.01

and another minor change, resulting in

n < 300K, score = p/1T * ((n*n)/(300K * 300K))
300K < n < 20M, score = p/1T
n > 20M, score = p/1T * 0.05
duplicates score = score * 0.01
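
For reference, here is the final scoring rule transcribed as Python (my transcription of the formulas above, not MikeH's actual stats script):

def sieve_score(p, n, duplicate=False):
    """Score for a factor at prime p eliminating candidate n (final rules above)."""
    T, K300, M20 = 1e12, 300_000, 20_000_000
    score = p / T
    if n < K300:
        score *= (n * n) / (K300 * K300)  # low-n factors scaled down quadratically
    elif n > M20:
        score *= 0.05
    if duplicate:
        score *= 0.01
    return score

print(sieve_score(1.5e12, 150_000))    # DC factor at n=150K: 1.5 * 0.25 = 0.375
print(sieve_score(1.5e12, 5_000_000))  # main-range factor: 1.5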

Moo_the_cow
05-11-2003, 06:24 PM
Thanks for the stats scoring change, Mike, it improved my score by more than 200 points :D
Anyway, it occurred to me that since the "secret" account is already done checking 3 of the 12 k's, the SoB.dat file (for DC sieve) should only contain the 9 k's that are not yet completely checked by "secret". It appears that changing the DC sieve to only include 9 k's instead of 12 will result in a 25% speedup.
Well, what do you guys think?

Mystwalker
05-12-2003, 05:04 PM
Well, what do you guys think?

Depends on the aim of the DC - if it goes further than checking the computations of the "prior" projects (which I'm almost sure it will), then it should continue.

Btw. is there anyone running DC-only with the new client on a fast PC? There should be some astronomic kp/sec values. ;)

Joe O
05-12-2003, 05:23 PM
How does 95999 p/sec grab you? PIII/500 Win98SE


90278 p/sec Celeron/450 Win NT 4

Nuri
05-12-2003, 05:32 PM
Well, what do you guys think?

I think we should not remove any k values up until we find the primes for those ks.




is there anyone running DC-only with the new client on a fast PC? There should be some astronomic kp/sec values.

I tried that with my PIII-1000 @ work for one of my DC patches at around 19T. Unfortunately, it slowed down from ~260k p/sec to ~225k p/sec. I don't think so, but maybe I did something wrong (it was late, and I was tired of working). I'll try again tomorrow and let you know the result.

Mystwalker
05-12-2003, 06:01 PM
How does 9599 p/sec grab you? PIII/500 Win98SE

Sounds a bit low - typo?

Nuri
05-12-2003, 06:22 PM
Unfortunately it slowed down from ~260k p/sec to ~225k p/sec.

That's really strange. It's the same case for my PIV-1700. Speed at DC dropped from 160k to 137k.

BTW, it shows 32% increase for alternative SoB.dat (from 72k to 95k).

PS: All tests are for v1.28 vs. v1.32.

Anyone else tried 1.32 at DC sieve?

Joe O
05-12-2003, 06:58 PM
Originally posted by Mystwalker
Sounds a bit low - typo?

Yes, it was a typo!

95999 p/sec PIII/500 Win98SE 5.5T for the p range


90278 p/sec Celeron/450 Win NT 4 3.3T for the p range


Both of these are for the lower range sieving, i.e. n < 3M

Joe O
05-13-2003, 12:32 PM
I'm going back to V1.30, at least until I have time to play with the alpha setting.

103593 p/sec PIII/500 Win98SE 5.5T for the p range for V1.30
97576 p/sec was the best for V1.32 and not even that for v1.33

Joe O
05-13-2003, 11:39 PM
I'm going back to V1.28 for now.

113232 p/sec V1.28
108353 p/sec V1.30
92915 p/sec V1.32
90278 p/sec V1.33
Celeron/450 Win NT 4 3.3T for the p range
This is for lower range sieving, i.e. n < 3M

MikeH
06-08-2003, 05:10 PM
I have been tweaking the parameters on the gap finder for low n. We now have everything in 10T strips (a la regular sieving), and the lower 1.5T as well (from here (http://www.aooq73.dsl.pipex.com/index.htm))

While sorting out the 0-1.5T area, I found a gap of 0.24G right down at 2.69G. I quickly sieved this with the current low n sob.dat - no factors. I then did the same with a 1G sieved sob.dat, and found 591 factors (all reported as new at submission). I've checked, and 563 are new unique factors.

Since these factors were not present in the current sob.dat file, this means I used them to build the sob.dat, but somehow failed to submit them (I did have thousands to do, but that's no excuse). Sorry. :blush:

With that sorted, I am now intrigued as to why the sieve stats (from tomorrow) show a difference of about 300 factors per million n in the p<3T column when compared with n>3M. Strange. Since Nuri is still going on the range 2500-3100, I guess some of those 300 could be in there (even though I show no gaps). Hope so.

Mike.

EDIT: Taking a detailed look at the current submissions, it appears Nuri still has 2.762T-3T to go. I'm not sure this will give us (300 * 2.7) factors, but it should be a reasonable number. The gap finder doesn't show these gaps because (I think) Louie covered this area when PRP had n<3M, but for a very narrow n (say 2.5-3m).

Nuri
06-09-2003, 04:06 PM
Mike, you're right. I still haven't finished my 2500-3100 range, and the factors in my range were the ones previously found by Louie.

I was away from the PC sieving that range for the last couple of days. I guess the remaining part (2762 - 3000) is about to finish today. I'll take a look and submit the factors tomorrow. But my estimate is that there should be roughly 300 factors there. So, there are still 500 factors missing.

Let me take a look at possible gaps below 3T too. I'll let you know if I find something.

EDIT: Mike, my internet connection has been very bad for the last couple of weeks since the earthquake, and I could not download the lowresults file. But I found a version of it that I previously downloaded, and checked for holes there. There are four instances where the density of factors is abnormally low. These might have been patched later, but since they are very small ranges (1.1G in total), I guess they're worth resieving.

Mike, could you please check these out (especially the first one).
1.486G-1.619G (or 1.48-1.62)
4.769G-4.803G
49.957G-50.576G
55.668G-56.019G

PS: 1T-3T seems hole free.

Joe O
06-28-2003, 08:23 AM
I've copied these from the coordination thread:

Originally posted by biwema
sorry, i installed the 300k-20m file later, so only 33050-33200 and 33270-33300 in the 33000-33300 range contain the factors of exponents smaller than 3M.

also be careful about the accepted gap between 33190 and 33280. it is not tested between 33200 and 33270.

biwema

quote:
--------------------------------------------------------------------------------
Originally posted by chvo
22500-22550 chvo [complete]

I found 7 factors (using the original doublesieving SoB.Dat, not the one that starts at 300K). Is that amount expected, or is it low?

chvo

Joe O
07-10-2003, 01:02 AM
I've copied this from the coordination thread:


Originally posted by MikeH - Edited by Joe O
After some analysis of the results.txt file (new scoring is almost ready), I believe the following reservations / completions for 300K - 3M have been forgotten.

14700-15000 priwo [complete] - Done with 3M-20M
15600-15700 Slatz [complete]
15700-15850 cmprince [complete]
17400-17420 alexr [complete] - Done with 3M-20M
21300-21320 Titalatas [complete]
22710-22800 cmprince [complete]
39500-39538 geeknik [complete]

Could the users check and confirm what the n range on their sob.dat files was for these ranges. Thanks.
Mike.

ceselb
07-10-2003, 10:27 AM
Moved to the sieve section of the forum.

ceselb
10-24-2003, 06:25 PM
Moved from the coordination thread.


Originally posted by chvo
To Keroberts1:
this isn't the place to discuss that, but in short: there have been a lot of PRP tests, but some of these tests will surely have returned wrong results. We double-sieve to reduce the number of candidates that will be retested (to check the results of the previous PRP tests).

larsivi
12-27-2003, 09:22 AM
I saw that quite a lot of new supersecret tests have been added. Wouldn't it be of interest to keep running n < 1M sieving until no tests (or at least only a few) for n < 1M are left un-double-checked?

Keroberts1
12-27-2003, 02:54 PM
These tests are very easy to run and for the most part have a very low error rate. In fact, these tests don't need to be redone at all, because it is very unlikely that we have missed a prime there, and the goal of our project is not to find the smallest prime but to find a prime for each K. It would in most cases be easier to test larger numbers and hope to find a prime there. Eventually, when the main effort gets far enough ahead of the double check, it will once again be beneficial to do the double check. However, for the range between 300000 and 1000000 we have not yet found a single test that was reported wrong. This leads me to believe that it would not be worth the effort to retest them when it appears there is almost no chance of finding a prime there. And anyway, DC sieving is much less efficient than regular sieving: eliminating a single test in regular sieving is like eliminating almost a hundred of the 500000-range tests (see the sketch below). To me it just doesn't make sense to spend the resources to sieve it out any more. I don't even think we should be sieving the 1 million to 5 million range. However, because I've been told the speed gain from that would be very small, I use the 1-20 million dat. Adding 300000 to 1 million to that would probably make almost no difference in the speed, but would offer almost no benefit to the project.
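
A rough sanity check of that "one big test is worth many small ones" claim, using the PRP-cost-scales-as-n^2 rule of thumb from earlier in the thread (if anything, "almost a hundred" looks conservative under this model):

def relative_cost(n, n_ref=500_000):
    """PRP cost of a test at n, relative to one at n_ref, assuming cost ~ n^2."""
    return (n / n_ref) ** 2

n_main_avg = (3_000_000 + 20_000_000) / 2  # average n eliminated by main-range sieving
print(f"one factor at n~{n_main_avg:,.0f} is worth "
      f"~{relative_cost(n_main_avg):,.0f} tests at n=500,000")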

larsivi
12-27-2003, 05:57 PM
Keroberts1 wrote:

I don't even think we should be sieving the 1 million to 5 million range. However, because I've been told the speed gain from that would be very small, I use the 1-20 million dat. Adding 300000 to 1 million to that would probably make almost no difference in the speed, but would offer almost no benefit to the project.

I wasn't talking about the main .dat file, but about continuing the 300K-3M sieving a little while longer. Just wondering whether it's worth it, really.

Keroberts1
12-27-2003, 11:14 PM
In my opinion, no, because the same effort could be used to sieve a higher range the first time, and this would find more useful factors. Perhaps there is some benefit to sieving 300000 to 3000000, but it is much smaller than the benefit of sieving with the main effort.

cedricvonck
02-27-2004, 05:43 PM
54450 54500 Range Done - 6 factors found & submitted via the sieve page.

I think I have done something wrong with the sieve page :blush: :blush:

factors found:
54458662105433 | 33661*2^564648+1
54467953065341 | 55459*2^2230798+1
54472098648923 | 5359*2^949302+1
54472893350251 | 24737*2^606727+1
54478211796511 | 22699*2^1154278+1
54497220425873 | 4847*2^1575231+1


Thnx!

cedricvonck
02-29-2004, 09:52 AM
42600 42650 Range Complete => 6 factors found (submitted via the sieve page)

42602795068649 | 22699*2^1609678+1
42606060519859 | 55459*2^1492510+1
42606124542061 | 28433*2^417553+1
42631715602241 | 67607*2^1968651+1
42638849820577 | 22699*2^1836838+1
42646326562783 | 55459*2^2448058+1

ceselb
03-02-2004, 12:23 PM
What happened? Was it something like this (http://www.free-dc.org/forum/showthread.php?s=&threadid=5714) ?

cedricvonck
03-02-2004, 12:34 PM
No it wasn't that....
I logged in
username
password

I waited 2 - 5 minutes (I saw preferences)

When I returned

I submitted the results

Then I saw Log - IN instead of preferences

:confused:

ceselb
03-02-2004, 02:00 PM
well, I don't know then. Try resubmitting today and tomorrow. If it doesn't work by then, contact louie (jjjjL (http://www.free-dc.org/forum/member.php?s=&action=getinfo&userid=441) on the forums)

cedricvonck
03-02-2004, 03:01 PM
ok thank you.
IMHO, I think that there was (is) a cookie problem???

0 factors were new.
12 results verified.

Nuri
03-02-2004, 08:44 PM
Originally posted by cedricvonck
No it wasn't that....
I logged in
username
password

I waited 2 - 5 minutes (I saw preferences)

When I returned

I submitted the results

Then I saw Log - IN instead of preferences

:confused:

The same also happened to me twice within the last week. But I later checked for the factors I submitted (through Mike's individual stats page), and everything seemed fine.

In fact, it seems fine for you too. All 12 factors you mention above seem to have been submitted. Well, unluckily, two of them in the 54450 - 54500 range were duplicates, but that's another issue.


On the other hand, it would be much better if this issue with logging in were resolved. It really makes one feel like something is wrong.