
View Full Version : Sieve coordination discussion



Keroberts1
06-14-2004, 12:10 PM
This is horrible. These were found by 4:13 PM yesterday but not submitted until now; I'm a few hours late.

450034195420763 | 33661*2^7544760+1
450077622239711 | 33661*2^12431160+1
450092349108361 | 27653*2^5865873+1
450093208218179 | 4847*2^19462671+1
450521306741677 | 67607*2^9058811+1
450557372899177 | 55459*2^6366298+1

vjs
06-14-2004, 12:20 PM
I'm in the same situation.

Just started sieving (newbie) and I have three files:
fact
factexcl
factrange

Not totally 100% sure what they are, etc.

But fact has a k/n pair with n=6.45m. I don't think I'll finish the range before we reach 6.45m; can we simply submit factors before the range is done???

If we can pre-submit, which file contains the ones we need to submit?

Joe O
06-14-2004, 12:57 PM
Originally posted by vjs
I'm in the same situation.

Just started sieving (newbie) and I have three files:
fact
factexcl
factrange

Not totally 100% sure what they are, etc.

But fact has a k/n pair with n=6.45m. I don't think I'll finish the range before we reach 6.45m; can we simply submit factors before the range is done???

If we can pre-submit, which file contains the ones we need to submit?

You can submit them as you go; I do. The fact.txt file is the one to submit. Just keep track of which ones you have submitted, so that eventually you submit them all.

vjs
06-14-2004, 01:02 PM
Joe, I did as you said, but I also submitted factexcl.

The database said 22 new factors verified and accepted????

562901010463619 | 24737*2^1514407+1
562902758879399 | 10223*2^3798137+1
562902807708023 | 67607*2^1627955+1
562902827483363 | 33661*2^11136072+1
562904037724211 | 19249*2^16016558+1
562904473884997 | 33661*2^5351616+1
562905183512027 | 10223*2^15699641+1
562908456832559 | 28433*2^7558201+1
562911958989661 | 19249*2^17539502+1
562914314479537 | 4847*2^18026559+1
562914516462629 | 28433*2^4348081+1
562919384671031 | 22699*2^16826446+1
562919391957341 | 33661*2^8934024+1
562922843517563 | 10223*2^15113021+1
562923979286951 | 33661*2^14977296+1
562924049930783 | 21181*2^18544796+1
562926713563607 | 55459*2^4042534+1
562928171203031 | 10223*2^9530921+1
562932700877003 | 67607*2^3545627+1
562933349297351 | 55459*2^10552894+1
562934652822127 | 55459*2^8704054+1
562936448049823 | 55459*2^7839202+1

along with

562906036826687 | 28433*2^2890417+1

from fact

Troodon
06-14-2004, 01:13 PM
· fact.txt - Here go the new factors.
· factexcl.txt - Duplicate factors (there is already another factor for that k/n pair, but at another p).
· factrange.txt - Here go the factors found outside the n range (1M < n < 20M).

- You should submit ALL the factors from fact.txt.
- From factrange.txt you should submit those with n < 1M, as they can be very useful for the double-checking accounts. Since currently only factors with n < 20M are accepted, you don't have to pick out the n < 1M ones yourself; you can select all the factors and submit them. Or you can use this program (http://www.mystwalker.de/LowNFinder.zip) by Mystwalker.
- There isn't any need to submit the factors from factexcl.txt.
- And the most important thing: you don't have to wait until you finish the range you picked up. What's more, every time a factor with n within the current PRP active windows (see the stats (http://www.aooq73.dsl.pipex.com/scores_p.htm)), or close to them, is found, you should submit it ASAP.
- http://www.seventeenorbust.com/sieve/ is the page for submitting factors. You just have to copy them in and press the submit button. Don't forget to be logged in! :)
- If you're on Windows, you can also use SoBistrator (http://n137.ryd.student.liu.se/sob.php) by mklasson to monitor your sieve (range, rate, factors, etc.) and automatically submit the found factors.
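For anyone who'd rather script this than eyeball it, here is a minimal sketch in Python of the kind of filtering LowNFinder does; the file name and the "p | k*2^n+1" line format are assumptions based on the factor lists posted in this thread:

# Sketch: pick the n < 1M factors out of factrange.txt for the double-checking effort.
# Assumes lines like "450034195420763 | 33661*2^7544760+1".
LIMIT = 1_000_000

with open("factrange.txt") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        n = int(line.split("^")[1].split("+")[0])  # exponent sits between '^' and '+1'
        if n < LIMIT:
            print(line)  # these are the ones worth submitting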

Joe O
06-14-2004, 07:05 PM
"(there is already another factor for that k/n pair but at another p)" This is exactly correct.

But "· factexcl.txt - Duplicate factors ."
This is not quite right. These are not the same as MikeH's duplicate factors. They could be, but they are also most likely to be what MikeH calls excluded factors. It depends on whether the originally found factor was or was not below 1G "Excluded factors (those factors not present after sieving 100<n<20M to p=1G) "

"The database said 22 new factors verified and accepted????" Factors are accepted, as long as they are within the stated bounds, and have not been submitted before. The sieve submission script only considers them to be "new" as long as that P K N triple has not already been submitted. So your 22 factors were accepted. But when you look at , your stats (http://www.aooq73.dsl.pipex.com/ui/5202.htm) on MikeH's page you will only see the unique factors. Later on, when MikeH manually adds your reservation range, you will see the counts for duplicate and excluded points.

Death
06-15-2004, 03:27 AM
Originally posted by Ken_g6[TA]
I'm a newbie to sieving, and I'm running it on a really slow and rarely used laptop, so I don't really want to reserve much.

470001-470002 KenG6 (ETA: mid July)

Ken, what's your crunching speed???

Nuri
06-15-2004, 03:11 PM
Originally posted by Joe O
"The database said 22 new factors verified and accepted????" Factors are accepted, as long as they are within the stated bounds, and have not been submitted before. The sieve submission script only considers them to be "new" as long as that P K N triple has not already been submitted. So your 22 factors were accepted.

Not exactly, I guess. As far as I know, the submission page first checks whether the submitted line p | k*2^n+1 contains an actual factor or not (i.e. whether p really divides k*2^n+1). If so, it is verified. If not, it's not verified.

If that specific p | k*2^n+1 was previously submitted to the database, then it is not a new factor.

Regardless of being duplicate or not, if the factor is verified and it is submitted for the first time, then it is a new factor.

p | k*2^n+1 becomes a duplicate if there was another p submitted, which divides that specific k*2^n+1.

There might be different types of duplicates.

If a factor for the specific k*2^n+1 was found before the sob.dat file you are using was created, then the client knows it's a duplicate (i.e. n is within the 1m-20m limit, but the related k/n pair does not exist in the dat file). So, it is dumped into the factexcl.txt file. (There might be other cases of dumping into the factexcl.txt file, but I guess this is one of the dominant cases.)

If a factor for the specific k*2^n+1 was not found before (but was found after) the sob.dat file you are using was created, then the client thinks it is a first-time factor, although it is not. Such a factor also passes the submission page without problem, but Mike's database catches it as a duplicate, because it knows about the other factor for that k/n pair and when it was submitted.

Since the latest sob.dat was created after a deep sieving point, this can be considered a rare occurrence (roughly 1.2% of the factors the client thinks are unique are duplicates. This figure was as high as 10%-15% before the last update of sob.dat).
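As an aside, the divisibility check the submission page performs is cheap to reproduce with modular arithmetic. A minimal sketch in Python; the sample line is one of the factors submitted above, so it should come out true:

# p divides k*2^n + 1  <=>  (k * (2^n mod p) + 1) mod p == 0.
# pow(2, n, p) keeps the numbers small; k*2^n + 1 itself has ~450,000 digits here.
def is_factor(p, k, n):
    return (k * pow(2, n, p) + 1) % p == 0

print(is_factor(562901010463619, 24737, 1514407))  # expected: True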

ceselb
06-16-2004, 09:31 PM
Originally posted by Death
Ken, what's your crunching speed???

Oops, deleted that post before I saw this. He's on a laptop that doesn't get used much.

engracio
06-28-2004, 02:48 PM
ceselb,


Just wondering if somebody has already done the range 322500-323000. The reason I am asking is that the first four (4) factors I found were reported as duplicates. Am I just resieving this range? Thanks.


e

larsivi
06-28-2004, 03:51 PM
Originally posted by engracio
ceselb,


Just wondering if somebody has already done the range 322500-323000. The reason I am asking is that the first four (4) factors I found were reported as duplicates. Am I just resieving this range? Thanks.


e

A duplicate isn't really a duplicate factor, but means that some other factor has been found for that k/n pair before. This happens fairly often.

engracio
06-28-2004, 04:01 PM
larsivi,



A duplicate isn't really a duplicate factor, but means that some other factor has been found for that k/n pair before. This happens fairly often.


Coolest, Thank you sir. Off I go then.:)


e

Nuri
06-28-2004, 09:24 PM
The chance of 4 in a row happening is really low (a rough estimate would be 1 in 4 million: assuming 2.26% of factors are currently duplicates, 0.0226^4 ≈ 1 in 3.8 million). Still, it's a possibility of course.

Another (more likely) possibility is that, for some reason, you submitted them twice (e.g. reloaded the page thinking it did not work) and did not notice the first occurrence of the submission results.

Also, there are other possibilities as well.

It is possible to determine if the factors were duplicate or not (or if they were submitted by somebody else before).

You'll find Mike's pages (http://www.aooq73.dsl.pipex.com/) very helpful.

If you want, we can help you sort out what the actual case was. Just post your four factors below.

Mystwalker
06-29-2004, 06:54 AM
A lot more factors are in factexcl.txt than in fact.txt - it is very likely that 4 excluded factors occur before a needed one...

Nuri
06-29-2004, 01:13 PM
322500878487647 55459 9934462 7192 30 e
322501126469329 22699 18540982 7192 30 d1
322501193034653 10223 8420801 7192 30 e
322501438517519 10223 16277129 7192 30 e


So, you submitted the factors under factexcl.txt as well, right?

Then, it's normal.

engracio
06-29-2004, 03:29 PM
Nuri,


Thanks for staying on top of this.:) :) :)



"322500878487647 55459 9934462 7192 30 e
322501126469329 22699 18540982 7192 30 d1
322501193034653 10223 8420801 7192 30 e
322501438517519 10223 16277129 7192 30 e


So, you submitted the factors under factexcl.txt as well, right?

Then, it's normal."


As I read the forum and get more factors on the ranges I have reserved, I understand more and more how it is supposed to work. Unfortunately, yesterday and this morning I submitted (and had accepted) more factors from factexcl.txt. 3 factors from fact.txt were submitted also.


Now, I am kind of confused about which data results to send. I know fact.txt must be sent ASAP, but what about the other two files, factexcl.txt and factrange.txt?

If somebody can point me to the right area, I would be most thankful.


e

Troodon
06-29-2004, 04:58 PM
Originally posted by engracio
Now, I am kind of confused about which data results to send. I know fact.txt must be sent ASAP, but what about the other two files, factexcl.txt and factrange.txt?

If somebody can point me to the right area, I would be most thankful.

Please see my previous post in this thread.

Nuri
06-29-2004, 05:31 PM
Short story:

You can forget about factrange.txt and factexcl.txt. Their contribution is infinitesimal (if not zero).


Long story:

Think of fact.txt as the actual product, and the other two as some by-products.

The factors we are aiming to find are stored in fact.txt. At current sieve levels, and with the sob.dat we're using, 97.6%* of the factors written to fact.txt will be unique (first factors for a k/n pair, where k = one of the 11 left, and 1m<n<20m).

* Calculation of 97.6%:
Currently, there are 547,950 k/n pairs without a factor, for 1m<n<20m and k = one of the 11 left (this 547,950 figure decreases at a rate of ~55 k/n pairs per day; see Mike's project stats page).

At the time our new sob.dat was created, there were 561,291 of those.
So, when the client finds a new factor, if it divides one of those 561,291 k/n pairs, the client assumes it is a first-time factor for that pair. However, we've already found factors for some of those pairs since the sob.dat was created. Assuming a random distribution of factors among k/n pairs, 97.6% (=547,950/561,291) of the factors written to fact.txt today will be the first for their k/n pairs, and the rest will be duplicates.
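A quick sketch of that arithmetic in Python (the figures are the ones from this post, a snapshot rather than live numbers):

# Share of fact.txt factors expected to be unique, per the figures above.
pairs_in_dat  = 561_291  # unfactored k/n pairs when the current sob.dat was built
pairs_left    = 547_950  # unfactored k/n pairs today
decay_per_day = 55       # pairs eliminated per day (project stats page)

print(f"{pairs_left / pairs_in_dat:.1%}")                         # 97.6%
print(f"{(pairs_left - 30 * decay_per_day) / pairs_in_dat:.1%}")  # ~97.3% a month from now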


factexcl.txt stores factors that the client already knows are duplicates (i.e. a factor had already been found for their k/n pair when the sob.dat we're currently using was prepared). If you consider how many more k/n pairs there would have been for the 11 k's with no sieving at all, it's easy to see why many more factors are written to factexcl.txt. In fact, the factexcl.txt file on my PC currently has 42 times more factors than fact.txt. Unfortunately, they're useless as far as the project's goal is concerned (that specific k/n was already taken out of the pool).


factrange.txt is another story. It stores the factors which the client catches by chance, for n<1m and for n>20m.

The factors with n>20m have little (or no) value for the project. There are two reasons: Firstly, when the PRP effort gets closer to 20m, we'll have to start sieving n>20m from the beginning anyway (so they will not save any sieving effort). And secondly, it's highly likely that most of those k/n pairs will be eliminated at the very beginning of the new sieve effort anyway.

The factors with n<1m might still have some value. The PRP double-check (second-pass) effort is currently at n=820,000. So, if your lucky factor has an n value between 820,000 and 1,000,000, and it is not a duplicate, it will save a second-pass PRP test. Roughly speaking, there are 5,500 tests left below n=1m. Please also note that a PRP test of this size takes only a few hours on an average PC. And a last note: this is a narrowing window (as second-pass approaches 1m), and at the current speed it will reach n=1m within two months or so. So, two conclusions: this is a time-limited opportunity, and the chance of a factor in factrange.txt hitting the desired range gets smaller and smaller as second-pass proceeds.

Apart from the benefit mentioned above, the only other benefit that comes to mind is this: if a relatively less active user reserves a range and does not show up at the forum for a long time, we'll know more precisely what portion of his range he sieved, provided he left some footprints via the excluded and duplicate factors he submitted (remember, they are much more frequent). Thus, the decision whether (or to what extent) to resieve his range would be easier.


Wrap up

There would be no harm in dumping the contents of factrange.txt and factexcl.txt (except that doing so inflates the results.txt file and costs some bandwidth for Mike and the P-1 factorers).

Still, the contribution of doing so is not very significant. So, act as you wish in that respect.

engracio
06-29-2004, 07:36 PM
Troodon



Please see my previous post in this thread.

It's not that I did not see your post; there just seems to be a lot of conflicting info. As normal, I is confused.


Nuri



There would be no harm in dumping the contents of factrange.txt and factexcl.txt (except that doing so inflates the results.txt file and costs some bandwidth for Mike and the P-1 factorers).

Thanks for the long and short. I think I will just upload the fact.txt unless requested otherwise.:) :) :)



e

Death
06-30-2004, 03:10 AM
well, that's your decision, but I prefer to upload all three files.

more information is better than lack of information.

engracio
06-30-2004, 09:22 AM
I think I will just upload the fact.txt unless requested otherwise.


Thanks Death, like I said, unless requested by the powers that be, I won't. Unlike you, I believe pertinent information is better than the whole enchilada. Barely able to eat a burrito, let alone the whole enchilada.:) :) :)


e

vjs
07-05-2004, 10:32 AM
I always upload all of the files, simply b/c it shows Mike that a range has been completed... I think. You could, for example, sieve 500G and not find a single factor for fact.txt; highly unlikely, but possible. However, if the other factors were submitted, Mike would still see the progress of the range.

engracio
07-05-2004, 12:15 PM
SOLD!!!!! for 5 cents. Since Mike is not saying anything (thinking it does not matter to him either way; better safe than sorry), I will upload the other two files as I complete the range. The fact.txt factors are uploaded as soon as they are found. Thanks for the info, Death, guess you were correct. :) :) :)



e

vjs
07-06-2004, 10:57 AM
engracio,

The other thing that is important to check for is those low out-of-range values.

Anything >830K right now is beneficial; however, if it's less than 1m it goes in the out-of-range file :confused: . Hopefully within the next 5 months this won't be the case, since supersecret should get to 1m by then.

Anyways, I guess I feel as though I'm pretty much alone in supersecret with only 40 results per day; wish I could send those scores to my home team ARS, but regardless it's being done nonetheless.

Perhaps everyone could submit their out-of-range results, especially those >830K; it would probably knock out at least a day or two worth of tests.

Remember, even if you have already reported the range as finished, you can still submit the out-of-range results.

Mystwalker
07-07-2004, 10:00 AM
Originally posted by vjs
Anything >830K right now is beneficial; however, if it's less than 1m it goes in the out-of-range file :confused: . Hopefully within the next 5 months this won't be the case, since supersecret should get to 1m by then.

I'd like to see supersecret go up faster. That way, one could raise the lower bound of the sieving range, gaining a small performance surplus...


Anyways, I guess I feel as though I'm pretty much alone in supersecret with only 40 results per day;

I did some tests from time to time, but at Riesel it doesn't take much longer to do a first-time test (currently @ n=1M). With >50 k's currently under investigation by B2, it's far more likely to find a prime there...


Perhaps everyone could submit their out-of-range results, especially those >830K; it would probably knock out at least a day or two worth of tests.

Remember, even if you have already reported the range as finished, you can still submit the out-of-range results.

I've rewritten my LowNFinder (http://www.mystwalker.de/LowNFinder.zip) to take a parameter as the lower bound (and optionally another one for the upper bound (Riesel has 600K as the lower bound for sieving)).
Just try e.g.
LowNFinder.bat 830 to get all factors with 830K < n < 1M - very useful for big factrange files...
If there's interest, I can add the option to enter the bounds directly in the program instead of by parameters only (better when you use a window manager instead of a command shell ;) ). Well, maybe I'll do this anyway; it's just a matter of prioritization then...

vjs
07-07-2004, 11:31 AM
I don't have Java installed on my machine; I just don't like it very much. I had a very early version installed, which was a mess. I'm sure it's improved by now.

Do you have a write-up on what your program does, exactly?

Can you simply enter a particular k and have it search for all of the lowest n between 830K and 1M, for example????

If so I may try it out.

I also have a little batch program that will allow you to run your own k/n's.
Great for doing supersecret work while getting credit to your account.

I'd like to see supersecret get up to 1m as well; there are something like 3600 k/n pairs between 900K and 1M.

It would take about 25 good machines about 2 weeks to do all of these values.

The other (best) option would be to have those who manage the queues create 3-5 new accounts which assign credit to the top 3 or 5 teams.

Say

TeamPrimeRib-Secret
Anandtech-Secret
TeamRetro-Secret
ExtremeDC-Secret
DPC-Secret

They could also do the same for the garbage account.

If someone were to do this, I could probably organize a gauntlet to get all n<1M done. If we have enough competition between teams here, we could have some type of interteam race to 1M...

Mystwalker
07-07-2004, 01:41 PM
My program simply filters out every factor (the whole line, of course ;) ) of the factrange.txt whose n is in the specified range.
E.g. LowNFinder 830 will give you all lines with 830K < n < 1M - regardless of the k.

It's just (pseudocode)


BEGIN
  foreach line {
    if (lowerBound < n < higherBound) then {
      output line
      write line to new file
    }
  }
END

Mystwalker
07-08-2004, 07:31 AM
Just got an idea concerning the factors out of range:

As proth_sieve has no SoB.dat entries for them, it cannot distinguish between factors for already-known tests and factors for tests unknown so far.
Thus, it is quite likely that a factor outside the range will be an excluded one. :(
Nevertheless, some are not - and those are worth a few hours of PRPing. :)

royanee
07-13-2004, 11:08 PM
Well, not sure if it's unique yet, but I just got:

317864024961919 | 28433*2^969049+1


Yay! :)

Edit: Just realized that I found one like that before, only I know that it's unique.

269.690T 21181 975668

vjs
07-14-2004, 11:48 AM
I'd suggest you submit it anyway; if it is a unique factor you'll get something like 5 points for it. Not much, but hey, it will save an hour of computing time on one machine for 5 seconds' worth of cut and paste.

vjs
07-14-2004, 11:49 AM
July 14th doesn't get any closer than your estimate of mid-July :thumbs:

Good work

Ever think about sieving with those Celerons???

royanee
07-14-2004, 05:20 PM
It's actually a few hundred points if it's unique. I did submit it, so...

*checks the scores page* looks like it was a duplicate. Ah well. I got 481 points for the other one. :)

Death
07-15-2004, 03:25 AM
Originally posted by vjs
July 14th doesn't get any closer than your estimate of mid-July :thumbs:

Good work

Ever think about sieving with those Celerons???

huh =) I speed it up using some old Pentiums, like 300 MHz ones

and I can only sieve with some service starter, not running it as usual. Don't want to scare the users =)))

Troodon
07-15-2004, 04:53 AM
Originally posted by Death
and I can only sieve with some service starter, not running it as usual. Don't want to scare the users =)))

Have you tried using runh.exe? It's a program that just hides the window.

ceselb
07-19-2004, 04:21 PM
Originally posted by vjs
Please feel free to condense for space

Another: 460000-470000 VJS [Late: Sept]

vjs
07-19-2004, 06:45 PM
That's fine as long as Mike can keep them as 1T ranges
on his page; it helps me keep track of the factors:

http://www.aooq73.dsl.pipex.com/ui/5202.htm


ceselb
07-20-2004, 12:32 AM
Originally posted by vjs
That's fine as long as Mike can keep them as 1T ranges


I *think* they'll stay; if not, give mike or me a shout.
Impressive speed, btw. What are you running?

vjs
07-20-2004, 08:47 AM
Originally posted by ceselb
I *think* they'll stay; if not, give mike or me a shout.
Impressive speed, btw. What are you running?

I posted a bunch of my scores in the benchmark thread; it's basically just a bunch of computers, but 3 are extremely fast at sieving.

3 x 2500 MHz Bartons
2 x 1800 XPs
2 x 866 P3's
1 x 800 P3
2 x 533 Celerons
1 x 1000 MHz Athlon

Totals somewhere around 3500 kp/s; hopefully I can hold on to all these machines for a while.

ceselb
08-30-2004, 01:21 AM
Originally posted by vjs
I was looking at the Gap ranges last night and was wondering if there is any reason to resieve or check the larger gaps for missed factors, once all of the low-n sieve is done???

http://www.aooq73.dsl.pipex.com/gaps/Gaps_n3_20_p01u.htm

MikeH is checking some suspicious holes himself. You'll need to talk to him if you want to do something.

MikeH
09-06-2004, 02:39 AM
Originally posted by ceselb
MikeH is checking some suspicious holes himself. You'll need to talk to him if you want to do something.

I'm not checking anything right now; all the current gaps look acceptable to me. Nuri was the last one to organise a hole-searching effort, but that is complete and I think we're all happy again now.:)

Nuri
09-11-2004, 12:15 AM
I agree. We've done enough checking of the holes back there, and we don't need to do that for the time being.

Keroberts1
09-13-2004, 09:26 PM
A moment of sieving glory: two factors have popped out in the last 15 minutes on the two machines I have at my home. Both are in the active range and have n's within 1,000 of each other. What are the odds of that? I'm not sure, 'cause I already submitted the factors and don't want to dig for the file again, but I think they were both the same k too.

Mystwalker
09-14-2004, 07:05 AM
I think we still need the sieving speed of both computers to calculate the odds.

vjs
09-14-2004, 08:36 AM
456.430T 24737 7031623
454.876T 27653 7030689

Very cool and about 130K points as well

vjs
09-14-2004, 10:30 AM
:drums: sieve main effort +400T :|party|:

:elephant: prp almost at 700m :bouncy:

:thumbs: Double check supersecret almost at 1m :cheers:

And P-1 has a little padding at 700m :smoking:

I think this is a milestone; hopefully we will have a prime shortly.

My bets are still on for 67607 by 7.25m.

Keroberts1
09-14-2004, 12:31 PM
About 450K and 550K, and they just happen to be my only machines sieving at the moment.

Mystwalker
09-16-2004, 06:56 PM
Originally posted by Keroberts1
A moment of sieving glory: two factors have popped out in the last 15 minutes on the two machines I have at my home. Both are in the active range and have n's within 1,000 of each other. What are the odds of that? I'm not sure, 'cause I already submitted the factors and don't want to dig for the file again, but I think they were both the same k too.

About 450K and 550K, and they just happen to be my only machines sieving at the moment.

I hope there are no theoretical errors in my calculations...

Assuming a factor density of 1 per 30G, I came to the conclusion that the chance of the 450K machine finding a factor within the active range within 15 minutes is roughly 1:7,000.
The chance that the other machine finds a factor within an "n distance" of 1,000 within 15 minutes seems to be ~1:570,000.
So, when you pick some specific 15 minutes, the chances are 1:3,990,000,000. If you are not picky about the time these two events have to take place (although they have to be within 15 minutes of each other), I guess the probability should be 1:285,000 - the likelihood that the faster machine finds a suitable factor within a 30-minute time frame (15 min before, 15 min after) of the other machine's factor find.


Originally posted by vjs
:elephant: prp almost at 700m :bouncy:

And P-1 has a little padding at 700m :smoking:

My bets are still on for 67607 by 7.25m.

I guess you mean 7m, don't you? Otherwise, I seem to have missed something... :D
Concerning the next prime, I think it will be at 7.4M < n < 7.5M, as this is the range of numbers that will likely be (completely) tested around end of November / early December, SB's prime-finding season. :p

Mystwalker
09-20-2004, 04:30 PM
I plan to resieve this range of mine:


RATIO: 338245.56G - 338431.67G, size: 186.11G, est fact: 7 (338245560022561-338431671692333) R: 1.042, S: 0.323
#*# ( 186.11G) : 337500-340000 Mystwalker [complete]

186.11G really seems a little too much. :( Are there any reasons against this?
I won't start the sieve for at least a day...

Nuri
09-20-2004, 04:45 PM
Not a reason against resieving, but there's an excluded factor in between (338315236675243 55459 115090 519 9 e), which implies that you did sieve the range.

I guess it was bad luck on your side to hit a hole that large.

vjs
09-20-2004, 05:29 PM
Originally posted by Mystwalker
186.11G really seems a little too much.

I generally keep all of my factXXX.txt files until after Mike has cleared the range or I have no gaps, etc.

You might want to look for these files and submit them as well.

This is part of the reason why I also submit factrange and factexcl. I'm not sure how these files are handled on the server side, but I would think it helps prove that a range was done correctly and reduces the gaps, etc.

I'm sure there are cases where large gaps occur by chance, hence the words "acceptable gaps". Some gaps will be larger than acceptable even if the range was done correctly.

I also noticed

http://www.aooq73.dsl.pipex.com/gaps/Gaps_n3_20_p01u.htm

You have some smaller "acceptable gaps". If you're splitting ranges, you may wish to check whether all of these large-gap ranges were done by one particular machine, and retest on another, etc.

There was an issue in the past where someone had an o/c'ed or overheating machine that was missing factors, so this could be your case???

Let us know what happens. At this point I wouldn't redo the range unless you can actually attribute all large gaps to one particular machine, or can somehow document the fact that part of the range wasn't done.

If you do decide to redo, how much of the range do you redo?
The gap may be 338245.56G - 338431.67G, but maybe you missed 338250-338500 entirely and someone else submitted a factor at 338431.67G, for example.

Since the lowest reported gap for that region is 106G, your gap could actually be up to ~290G.

In any respect keep up the good work!!!

vjs
09-20-2004, 05:30 PM
Looks like Nuri beat me to the punch on your question.

royanee
09-21-2004, 05:21 AM
Originally posted by Mystwalker
I plan to resieve this range of mine:


RATIO: 338245.56G - 338431.67G, size: 186.11G, est fact: 7 (338245560022561-338431671692333) R: 1.042, S: 0.323
#*# ( 186.11G) : 337500-340000 Mystwalker [complete]

186.11G really seems a little too much. :( Are there any reasons against this?
I won't start the sieve within a day...

May I sieve it? Or rather... now that I'm more than 1/3 of the way through sieving it... is it okay if I finish it, and somehow attribute the results to you? (If I am not logged in, you get the results, don't you?)

Mystwalker
09-21-2004, 07:29 AM
Originally posted by royanee
May I sieve it? Or rather... now that I'm more than 1/3 of the way through sieving it... is it okay if I finish it, and somehow attribute the results to you? (If I am not logged in, you get the results, don't you?)

No problem, just continue. If you find a factor, you can claim the score for yourself if somehow possible. I think that's just fair..
Did you already find something?


Originally posted by vjs
if your splitting ranges you may wish to check if all of these large gap ranges are done with one particular machine, retest on another etc.

I'm quite sure I did that range on the cluster, which means the range was split into 10G parts and distributed over ~30 PCs...
There might have been a problem when collecting the results, though. I'm unaware of such a glitch (I only once lost some results by deleting them before submitting :bang: , but I immediately noticed it), but I won't bet my life on it. ;)

Joe O
09-21-2004, 10:44 AM
Originally posted by Mystwalker

I'm quite sure I did that range on the cluster, which means the range was split into 10G parts and distributed over ~30 PCs...


What program(s) are you using? Which version[s]? The reason that I ask is that some versions of SoBsieve had problems. Look at my last post in this thread. (http://www.free-dc.org/forum/showthread.php?s=&threadid=6265&perpage=25&pagenumber=3)

royanee
09-21-2004, 04:38 PM
Originally posted by Mystwalker
No problem, just continue. If you find a factor, you can claim the score for yourself if somehow possible. I think that's just fair..
Did you already find something?


Maybe 1 factor (could be excluded since .dat creation) and maybe 1 out of range n = ~115k

Mystwalker
09-21-2004, 05:46 PM
Originally posted by Joe O
What program(s) are you using? Which version[s]?

All use proth_sieve 0.42 Linux version.

vjs
09-22-2004, 08:24 AM
Originally posted by Moo_the_cow
Why not?

I don't understand...

Keroberts and I are trying to finish everything between 450-500T before the main effort catches up... Yes, it's a silly goal, it doesn't matter, it's all slow-machine sieving, etc., but hey, it's fun; that's why we're here.

450002-460000 keroberts1
460000-470000 VJS [Complete] 337 factors
470000-470005 KenG6 (ETA: mid Jul) (pm sent Aug 30th)
470005-480000 VJS (ETA: Late Sept)
480000-481000 Moo_the_cow
481000-490000 VJS

498000-504000 Complete

I know you're big in factoring and contribute, etc., so I was wondering if you're still working on the range, or just have it reserved, etc.

According to

http://www.aooq73.dsl.pipex.com/ui/2537.htm
and
http://www.aooq73.dsl.pipex.com/gaps/Gaps_n3_20_p04u.htm

You haven't submitted anything, so I was wondering.

I asked Mystwalker about his range (550000-550500); he said he's using it as a backup, etc., which is great.

I actually have one machine working on 560000-562000, but it won't finish for months, and chances are I'll never be able to recover those results even if it does finish its range, so it's not posted/reserved, nor should it be.

(Snipped /ceselb)

So, is it "why not?", "yes, I'm working on it", "no, it's mine", or "yes, you can have it", etc.?

Thanks for your time.

Moo_the_cow
09-23-2004, 12:35 AM
(quote removed /ceselb)

I meant that I didn't see why you thought that I wouldn't finish the range. I haven't submitted any results, because I usually submit all my results at the same time when the entire range is done. Unlike some people, I don't submit factors at the end of every day.

I don't mind if you want some of my range. If that's the case, take 480500-481000, and I'll just sieve 480000-480500.

royanee
09-23-2004, 01:33 AM
I usually do at the end of the sieve, unless I am checking the fact.txt file and see one with n near 6m or 7m, etc., so that I can save someone a Proth test. :) But that's pretty rare for me.

vjs
09-23-2004, 10:39 AM
Thanks Moo,

I didn't mean to insinuate that you were not going to finish (although I guess I did); I should have asked about your progress instead. I'm generally more like royanee: I check the fact file and submit when I have factors near the double-check bounds or within 200K of PRP. Might just save a factoring effort as well.

However, doing >300G per day combined :D with all my machines generally means I have at least one factor a week within the bounds. Looking at SoBistrator right now, I have factors around 7.08, ~7.2, and :trash: ~6.65. But nothing urgent, so I'll probably submit on Monday.

Thanks for everyone's efforts, Moo, ceselb, Mike.

P.S. There is a very off chance I might get a true quad 2.8 HT Xeon 4MB for a week or two; it might be my first decent factoring machine to play with.

Death
09-24-2004, 04:27 AM
if you are looking at SoBistrator and see new factors -

WHY THE H8L don't you press the submit button??????? :crazy:

why???? why???????? :cry:

vjs
09-24-2004, 12:11 PM
Not all of my machines are networked... so dividing and submitting a range once or twice a week ensures I don't miss any portions of a range, etc.

Don't worry, I make sure those factors close to the PRP double-check or garbage efforts get submitted :thumbs: and simply use SoBistrator to monitor the activity ("new factors"); not all of my computers are on a network.

Been quite a bit of activity on secret, supersecret and garbage lately; is that you, Death???

Perhaps we should move all of this to the discussion thread.

Death
09-24-2004, 01:36 PM
Nope, that's somebody else... I'm currently stuck with BOINC LHC@home; they only have 2000 users to test the client, and I'm on board.

maybe l8r...

And you say that SoBistrator is on a machine w/o network, am I right? Because otherwise I just can't understand how you can look at SoBistrator and not press the submit button =)) Or just turn on the auto-submit feature..

And don't worry about the discussion. I bet ceselb will kill our posts in a couple of days... =))

Moo_the_cow
09-26-2004, 02:45 AM
Originally posted by Death
if you are looking at SoBistrator and see new factors -

WHY THE H8L don't you press the submit button??????? :crazy:

why???? why???????? :cry:
I don't know if this question was also directed towards me, but the answer is that I'm too lazy to even check fact.txt for any new factors.

I didn't mean to insinuate that you were not going to finish
Actually, I don't think I have enough computing power to finish the 480000-481000 range before the main sieve effort catches up, so you were probably right.
Good luck in finishing your 450-500T sieve effort :)

royanee
09-26-2004, 06:02 PM
Originally posted by royanee
Maybe 1 factor (could be excluded since .dat creation) and maybe 1 out of range n = ~115k

Found one:

338.322T 24737 15174463

As you said, probably a small chunk was lost for some strange reason. But a missed factor isn't as bad a thing as a missed prime. :)

vjs
09-28-2004, 12:48 PM
Curious how the increase in PRP/P-1 client speed affects sieve effectiveness.

It was stated somewhere before that, due to the factor density for a range, sieving speed, PRP time per n, possible primes, double checks, duplicate factors, etc., it was still efficient to sieve to some very high T, much greater than the present T.

Just wondering how a much faster P-1/PRP client changes the sieve math.

ceselb
09-28-2004, 01:14 PM
Quite a bit is my guess. :D

I don't know really, but maybe nuri can do some calculations. :notworthy

vjs
09-28-2004, 01:31 PM
I'm also throwing this out there for consideration, for after the next prime is found and we create a new data file.

It might be beneficial to remove an additional k :eek: (i.e. the prime and another).

For example, the largest k (67XXX??) takes something like 15% of the sieving processing time but only contributes 5% of the eliminated factors (I know these numbers are off, but something like this might be of importance). Or breaking the sieve into two sections with two data files, dunno. Might be more effort than it's worth, or counterproductive.

Keroberts1
09-28-2004, 01:53 PM
I don't think the efficiency is affected much at all, because the new client is only good for SSE2 machines, which shouldn't have been sieving anyway. Those machines would have been best suited for P-1, and now for PRP. The PRP boundary moving faster only means more of our factors will enter the active range quicker. The only thing that affects sieve efficiency that I see as relevant here is the PRP depth. As the PRP depth gets farther along, the number of factors that will have already been passed by the front end of the first checking phase will be greater, hence more wasted factors.

ceselb
09-28-2004, 02:05 PM
There is work being done on a non-SSE2 version too. That will have more effect.

Mystwalker
09-28-2004, 06:31 PM
Originally posted by ceselb
There is work being done on a non-SSE2 version too. That will have more effect.

True, but without big performance increases. It's just like a junction, directing SSE2-capable CPUs to the new code and all the others to the old code.
The enhancements are SSE2-specific and mainly the result of George generalizing GIMPS-specific code to general purpose code (SoB, Riesel, PSP, 321, ... benefit from it).

Keroberts1
09-29-2004, 04:24 AM
It has been said that there will be non-SSE2 versions released too.

Death
10-07-2004, 04:03 AM
Originally posted by engracio
SOLD!!!!! for 5 cents. Since Mike is not saying anything (thinking it does not matter to him either way; better safe than sorry), I will upload the other two files as I complete the range. The fact.txt factors are uploaded as soon as they are found. Thanks for the info, Death, guess you were correct. :) :) :)



e

death is always correct

vjs
10-08-2004, 12:42 PM
:swear: :swear: :taz: :taz: :cage:

Arrrrgh, just submitted two factors around n=7.05m and n=7.08m; they both popped up last night. A day or two late, I guess.

vjs
10-19-2004, 02:18 PM
Just to stir up a little excitement in the sieve and P-1 area, with challenges back and forth, here is what happened.

Way to go sievers.... :cheers:


dmbrubac,
Just letting you know I found this factor in your range:
481790636706229 | 24737*2^7263031+1

What a coincidence

Sieve 480500-490000 VJS
P-1 7262000 7280000 dmbrubac 0 [reserved]

7th dmbrubac ... 3326948.25
8th vjs ............ 3142839.40

I feel as though I'm going to steal 7th from you with this factor, worth about 80K.

vjs
10-23-2004, 06:16 PM
I was looking at this range. I know it is way, way ahead of where we are, but the expected factors per 1000G figure is very high:

1763615.23G - 1812168.61G, size:48553.38G, est fact:11846
244.0 factors per 1000G

A lot higher than one of my previous ranges:

460000G - 480000G, size: 20000G, factors found 689
34.45 factors per 1000G

Two things:

First, the predicted factors per G must be incorrect.

Second, I tried sieving the range but the client wouldn't work; is there a maximum G the client will accept???

Thanks,
VJS

royanee
10-24-2004, 05:18 AM
The program can only sieve up to 2^50, which is just a little bit smaller than 1125900 G
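For the curious, a quick check of that bound in Python (nothing project-specific assumed):

# The client's upper p limit, expressed in G (1G = 10^9):
print(2**50 / 10**9)  # 1125899.906842624 -> just under 1,125,900G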

vjs
11-02-2004, 01:06 PM
Just wanted to let everyone know I'm getting a slight slowdown on my ~550T ranges. I remember my completed range at 2^49 (560T) also being a little slower.

I was wondering if anyone else is seeing the same effect...

I guess this is directed mostly toward Nuri, Keroberts, and mklasson

Keroberts1
11-02-2004, 05:29 PM
Yes, I actually did see this effect. Not sure about the reason; that would probably be a question for mklasson.

Mystwalker
11-02-2004, 06:41 PM
Just finishing up my 550T range - sieving speed seems to be ~10% lower than it used to be, but strangely I no longer encounter the 50% slowdown I had earlier...

vjs
11-02-2004, 11:20 PM
10% sounds about right; my quick guesstimates show my fastest computer is down from ~700 kp/s to 650 kp/s....

vjs
11-03-2004, 01:20 AM
e,

RU around??

vjs
11-12-2004, 02:10 PM
I'm glad to see the renewed sieve effort. A couple of us have been trying to get this done for some time, and I only wish I had personally started sieving earlier.

Had we all started earlier, i.e. while PRP was around 3m, more of the found factors would have eliminated tests. Thus far the sieve effort is almost at 500T, and we have eliminated 31% of the tests between 1m and 20m. Even if we find 4 more primes before 20m, at least the majority of these factors were/are useful, and the more we find the better. In addition, we will more than likely double-check more than half of the k's below 20m. Point being, even if you find a factor with n below 7m, you have eliminated a double check. If it's above the current threshold of ~7.35m, you will have eliminated 2 checks.

The sad news is there is a point of diminishing returns; the good news is we are not there yet. Our current client can sieve up to 1000T, so basically we are halfway there. Whether we should sieve past 1000T is questionable. In addition, there are k/n pairs beyond 20m that we will have to test as well, but not for a few years. Hopefully the project will have eliminated a lot of k's by then.

My main point is we should try to drive the sieve depth up as fast as possible to some target T, and then stop sieving.

So how long will it take us to get to 1000T???

Well, that of course depends on how many people we have helping...

Here are some numbers. Currently we are finishing around ~1000G per day, which is roughly 11,500 kp/s.

At this rate, we will get to 1000T in about 16 months.

So how many computers will it take to sieve to 1000T (or 1P) in 6 months?

Well, if we increased our total sieve speed to the "primorial rate" of 30029 kp/s, we could do it. This could be done with a combination of <100 reasonable XP 1800 and P3 1000 MHz computers.

If I had a hundred of my fastest sieving computer (a 2500 MHz Barton), it would only take about 3 months.

Thank you all for helping out.
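The arithmetic behind those estimates, as a small Python sketch; the ~500T remaining is my reading of "half way there", and the rates are the ones quoted above:

# Days to sieve the remaining ~500T at a given combined rate (in kp/s).
REMAINING_P = 500e12  # ~500T left between the current depth and 1000T

def days_to_finish(rate_kps):
    return REMAINING_P / (rate_kps * 1_000) / 86_400  # 86,400 seconds per day

print(days_to_finish(11_500))  # ~503 days, i.e. roughly 16 months
print(days_to_finish(30_029))  # ~193 days, i.e. just over 6 months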

vjs
11-26-2004, 01:35 PM
factor=329.299T k=21181 n=4604 score=0.001 Fri 26-Nov 2004 dmbrubac

I guess every factor helps... But wow that's a small n!!!

I think you not only hold the 100K record, but that's also one of the smallest I've seen from the sieve.

Except for Joe's P-1, which is really tiny:

Factor=746.073P k=4847 n=351 score=1.021 Joe_O

vjs
11-27-2004, 12:01 PM
Nice range E :thumbs: you too larsivi

Glad to see the regulars reserving larger ranges; I'm sure Mike and ceselb will appreciate it.

ceselb
11-27-2004, 08:34 PM
Originally posted by vjs
Glad to see the regulars reserving larger ranges; I'm sure Mike and ceselb will appreciate it.

Yes, I do.

maddog1
11-27-2004, 08:43 PM
Speak of the devil...how about this (obtained through sieving)
490586292728209 | 24737*2^1303+1 :D

larsivi
11-28-2004, 11:02 AM
Originally posted by vjs
Nice range E :thumbs: you too larsivi

Glad to see the regulars reserving larger ranges; I'm sure Mike and ceselb will appreciate it.

Well, thank you. The point is that, except for one slow, noisy machine, I seldom know what power is available for more than one or two weeks ahead. I therefore try to reserve almost exactly as much as I know I'll be able to process.

engracio
11-28-2004, 11:55 AM
Well, thank you. The point is that, except for one slow, noisy machine, I seldom know what power is available for more than one or two weeks ahead. I therefore try to reserve almost exactly as much as I know I'll be able to process.


Ya............



e:(

vjs
11-29-2004, 12:59 PM
Does anyone have any data on how much processing time each k takes?

I know I did some of this a while ago, seeing how much of a speed-up we would see from eliminating various k...

Do some k's play well together? Are some outstandingly different, such as 67607??

vjs
11-30-2004, 01:51 AM
I know sometimes my rate is up and down depending upon which machines are active. Yesterday I lost a couple of slower machines, probably 600 kp/s total :( Combine that with the temporary high-n sieve effort, and I'm probably down to around 2000 kp/s.

I know 2 Mp/s is a lot, but it seems like I'm not contributing much; by the end of the week I should be back up around 3 Mp/s :D

royanee
11-30-2004, 02:45 AM
Originally posted by vjs
I know 2 Mp/s is a lot, but it seems like I'm not contributing much; by the end of the week I should be back up around 3 Mp/s :D

hehe, well at least you're contributing to the project. :) I had an interesting change in productivity. After a mobo swap with a friend (I had borrowed his Athlon XP 2800+), I was left with an Athlon XP 1600+... I went from 670 or so kp/s (the actual numbers are in the speed comparison thread) to about 400 kp/s (it was 360 kp/s, but I'm using the Wine + Windows proth_sieve_cmov speed trick). It's so much slower, but not too surprisingly, much quieter and with fewer stability issues. :)

Keroberts1
12-02-2004, 02:06 PM
I was left with an Athlon XP 1600+... I went from 670 or so kp/s (the actual numbers are in the speed comparison thread) to about 400 kp/s (it was 360 kp/s, but I'm using the Wine + Windows proth_sieve_cmov speed trick). It's so much slower, but not too surprisingly, much quieter and with fewer stability issues.

This was a quote by someone in the coordination thread. What does this Wine plus Windows cmov mean? Is this just to run the cmov build on Linux? Or is it a trick for use on Windows machines that are less stable? I have an Athlon 2000 that crashes pretty quickly when I start the cmov version. The heat is the main issue, but I haven't been able to get that taken care of yet, so is there a way to run the siever at a slightly slower pace so as to not create so much heat? I'm pretty sure it's all the floating-point operations that are generating the high temperatures. It's a little high when I'm not running the siever, but when I am it peaks around 88 degrees C. Yes, I know that is bad, so no one needs to tell me that.

larsivi
12-02-2004, 02:41 PM
Originally posted by Keroberts1

This was a quote by someone in the coordination thread. What does this Wine plus Windows cmov mean? Is this just to run the cmov build on Linux? Or is it a trick for use on Windows machines that are less stable? I have an Athlon 2000 that crashes pretty quickly when I start the cmov version. The heat is the main issue, but I haven't been able to get that taken care of yet, so is there a way to run the siever at a slightly slower pace so as to not create so much heat? I'm pretty sure it's all the floating-point operations that are generating the high temperatures. It's a little high when I'm not running the siever, but when I am it peaks around 88 degrees C. Yes, I know that is bad, so no one needs to tell me that.

I think the conclusion in an earlier discussion was that the Windows binary is more efficiently compiled than the Linux binary. This, combined with the fact that Linux is more efficient than Windows, meant that running the Windows proth_sieve under Wine in Linux was faster than running it under Windows itself. The stability issue in question was about the hardware, not the software.

royanee
12-02-2004, 05:49 PM
Actually, that was me who posted that. It is faster in Linux to run the Windows binary under Wine than to run the native Linux binary (though I haven't compared it to the statically linked Linux binary). I don't know how fast this machine goes on Windows, but it's probably faster, unless the terminal process in Windows is slow, which is very possible.

Grafux
01-09-2005, 11:51 PM
Removed reservation /ceselb

Correct me if I'm wrong, but that should look like this:

Grafux:~ Grafux$ ./NbeGon_010_osx -s=SoB.dat -f=SoB.del -d=1.42 -p=630650G-631000G
N-be-gone v0.10sob (OSX) 52-bit 2003/01/24, Phil Carmody
# DLOG[10/1.420000] Huge (5160*3632)
# 2.6 p=630650000000077 (#f=0) hash overflows: 0|0|0|0|0|0
# 4.7 p=630650000048143 (#f=0) hash overflows: 4|0|0|0|0|0
# 7.4 p=630650000113669 (#f=0) hash overflows: 7|0|0|0|0|0
# 10.4 p=630650000179319 (#f=0) hash overflows: 11|0|0|0|0|0
# 13.5 p=630650000244743 (#f=0) hash overflows: 17|0|0|0|0|0
# 16.7 p=630650000310299 (#f=0) hash overflows: 26|0|0|0|0|0
# 20.0 p=630650000375873 (#f=0) hash overflows: 32|0|0|0|0|0
# 23.1 p=630650000441387 (#f=0) hash overflows: 40|0|0|0|0|0
# 26.2 p=630650000506909 (#f=0) hash overflows: 47|0|0|0|0|0
# 29.3 p=630650000572421 (#f=0) hash overflows: 53|0|0|0|0|0
# 32.5 p=630650000637971 (#f=0) hash overflows: 56|0|0|0|0|0

So the "G" tells the number to be "giga" or milllion? Is that right?

vjs
01-10-2005, 12:22 AM
Looking good, glad you got it figured out!!!

You're correct on the G, and double-checking your number:

630650000637971

630,650,000,637,971 (1G = 1,000,000,000), so you're good to go.

Against one of my factors for comparison (my range 504000-513000):

630650000637971
504769849354361 | 10223*2^11749517+1


:D
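The digit grouping done by eye above is easy to reproduce in Python, for anyone double-checking their own p values:

# Group the digits of p to see how many G (1G = 10^9) it is:
p = 630650000637971
print(f"{p:,}")         # 630,650,000,637,971
print(p // 10**9, "G")  # 630650 G -> inside the 630650G-631000G range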

Grafux
01-10-2005, 12:44 AM
"G" is billions, of course. You can tell I'm not a mathematician. ;)

So I haven't found any factors/primes unless I see a newly modified/created 'fact.txt' in the same folder as SoB.dat, right? Otherwise, NBeGon will finish sieving and produce no output if there are no significant findings in my range.

Hope I find some after all this!

Mystwalker
01-10-2005, 08:07 AM
Originally posted by Grafux
So I haven't found any factors/primes unless I see a newly modified/created 'fact.txt' in the same folder as SoB.dat, right? Otherwise, NBeGon will finish sieving and produce no output if there are no significant findings in my range.

In addition, NBeGon will increase the #f count (f for factor). :)

Joe O
01-10-2005, 10:22 AM
If I'm not mistaken, the -f=SoB.del in the parameter list will cause NbeGon to write factors to the file SoB.del, not fact.txt.

Mystwalker
01-11-2005, 06:19 AM
Wasn't it the other way round (SoB.del as default)?

Joe O
01-11-2005, 12:03 PM
Originally posted by Mystwalker
Wasn't it the other way round (SoB.del as default)?
Yes, AFAIR SoB.del is the default.
I posted what I posted because in Grafux's first post he had explicitly set it to SoB.del, and in his next post he talked about fact.txt being modified/created. He would be waiting a long time for that, and might miss some factors.

cedricvonck
01-13-2005, 12:25 PM
630600-630650 CedricVonck [complete]

2 factors:

630632836737929 | 4847*2^17664591+1
630647310534011 | 21181*2^10397828+1


I do not think the sieve submit form took my username into account, even though I was logged in.

vjs
01-13-2005, 12:35 PM
Sometimes you have to:

1. Close all Internet Explorer windows.
2. Log in and check the "remember me" box.
3. Close all Internet Explorer windows again.
4. Go directly to the page http://www.seventeenorbust.com/sieve

Now in the top left corner it will say "logged in as ...".

5. Submit your factors.


A few of my computers show this behavior from time to time.

Nuri
01-13-2005, 03:11 PM
Alternatively

1. login
2. go to sieve submission page
3. if your name does not appear on the upper left corner, simply refresh the page

it will appear.. :D

vjs
01-13-2005, 05:18 PM
:bang:

Nuri, :notworthy

Sometimes the simplest solution is the easiest. Don't you hate cookies and Internet Explorer? Neither does what you want it to, but they broadcast PI to spammers. :mad:

Silverfish
01-19-2005, 08:03 PM
chris's stats say 49 factors after [complete] for the recently completed range. However, I made a copy of his results, and only 6 of them are unique factors (from fact.txt), which I gather is what should be listed after [complete]. The others were duplicates/excluded, or out-of-range factors.

chris
01-19-2005, 08:39 PM
I only claimed as many new factors as the verifying script told me (basically, I do not 'need' a single factor marked as new, but then the script has an error; that's why I posted all the .txt files made by what is most likely the only sieve I'll ever do).

FYI, I post it again:

630500-630600 chris [complete] 49 factors found

As this is probably my first and last sieve range (I'm moving to second-pass), I post the details :-)

fact.txt:

Factors
630502283108611|4847*2^15459183+1
630518226651589|55459*2^4973386+1
630524523748213|10223*2^6076361+1
630553039164781|24737*2^17095303+1
630556457059097|4847*2^13880991+1
630594205955369|55459*2^15377554+1
Verification Results
630502283108611 4847 15459183 verified. 630518226651589 55459 4973386 verified. 630524523748213 10223 6076361 verified. 630553039164781 24737 17095303 verified. 630556457059097 4847 13880991 verified. 630594205955369 55459 15377554 verified.

Factor table setup returned 1
Test table setup returned 1

6 of 6 verified in 0.32 secs.
6 of the results were new results and saved to the database.

factexcl.txt:

Factors
630501819547291|67607*2^1567299+1
630503020460129|4847*2^12546255+1
630508296720077|67607*2^13350107+1
630509258453093|10223*2^10718789+1
630513570390383|4847*2^6485031+1
630513653497813|10223*2^18473453+1
630516073868297|10223*2^7204493+1
630517937990383|19249*2^4328798+1
630518581425899|27653*2^9228813+1
630518986879861|55459*2^7967962+1
630522188344579|55459*2^17492518+1
630523650958871|24737*2^3453127+1
630529797283667|55459*2^2620798+1
630533550189827|21181*2^7338860+1
630533935843711|55459*2^18557986+1
630536703592349|33661*2^13995144+1
630537711517319|33661*2^5097720+1
630541395940919|67607*2^15900571+1
630542285338183|27653*2^11689845+1
630547089263687|33661*2^7170624+1
630553131559823|22699*2^10136206+1
630556063381957|10223*2^7741985+1
630557910291493|67607*2^6952955+1
630562171318039|67607*2^7043747+1
630568702912541|33661*2^15856800+1
630569569576691|24737*2^6590911+1
630570785620553|55459*2^15556186+1
630572376530153|55459*2^3814522+1
630573985178651|67607*2^2287739+1
630574198756351|67607*2^17489339+1
630575029888477|10223*2^18912665+1
630576293257283|55459*2^5503306+1
630581350267969|33661*2^16621368+1
630581350267969|33661*2^16621368+1
630581515253609|19249*2^17488202+1
630581918276861|33661*2^7961568+1
630583067057381|10223*2^1385117+1
630583584835913|24737*2^4019959+1
630583699871819|24737*2^19129543+1
630595404666527|10223*2^1831349+1
630597639368183|55459*2^11977918+1
630599627143489|55459*2^2975818+1
630599656162337|67607*2^5539827+1
Verification Results
630501819547291 67607 1567299 verified. 630503020460129 4847 12546255 verified. 630508296720077 67607 13350107 verified. 630509258453093 10223 10718789 verified. 630513570390383 4847 6485031 verified. 630513653497813 10223 18473453 verified. 630516073868297 10223 7204493 verified. 630517937990383 19249 4328798 verified. 630518581425899 27653 9228813 verified. 630518986879861 55459 7967962 verified. 630522188344579 55459 17492518 verified. 630523650958871 24737 3453127 verified. 630529797283667 55459 2620798 verified. 630533550189827 21181 7338860 verified. 630533935843711 55459 18557986 verified. 630536703592349 33661 13995144 verified. 630537711517319 33661 5097720 verified. 630541395940919 67607 15900571 verified. 630542285338183 27653 11689845 verified. 630547089263687 33661 7170624 verified. 630553131559823 22699 10136206 verified. 630556063381957 10223 7741985 verified. 630557910291493 67607 6952955 verified. 630562171318039 67607 7043747 verified. 630568702912541 33661 15856800 verified. 630569569576691 24737 6590911 verified. 630570785620553 55459 15556186 verified. 630572376530153 55459 3814522 verified. 630573985178651 67607 2287739 verified. 630574198756351 67607 17489339 verified. 630575029888477 10223 18912665 verified. 630576293257283 55459 5503306 verified. 630581350267969 33661 16621368 verified. 630581350267969 33661 16621368 verified. 630581515253609 19249 17488202 verified. 630581918276861 33661 7961568 verified. 630583067057381 10223 1385117 verified. 630583584835913 24737 4019959 verified. 630583699871819 24737 19129543 verified. 630595404666527 10223 1831349 verified. 630597639368183 55459 11977918 verified. 630599627143489 55459 2975818 verified. 630599656162337 67607 5539827 verified.

Factor table setup returned 1
Test table setup returned 1

43 of 43 verified in 1.19 secs.
42 of the results were new results and saved to the database.

factrange.txt:

Factors
630543670897061|4847*2^291423+1
630578348165369|67607*2^22400699+1
Verification Results
630543670897061 4847 291423 verified.

Factor table setup returned 1
Test table setup returned 1

1 of 2 verified in 0.03 secs.
1 of the results were new results and saved to the database.

You see, I do not 'claim' I found 49 new factors, I was only TOLD I DID.

Cheers,

Chris

Mystwalker
01-20-2005, 06:56 AM
What mainly matters is the number of new factors in fact.txt...
But you're right - this is mentioned nowhere AFAIK. :(
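
For anyone who wants to sanity-check their own counts before submitting, here is a minimal sketch (Python; the function name is hypothetical, and the p|k*2^n+1 line format is the one shown in the posts above) that tallies the distinct factors in a results file. Duplicate entries, like the doubled 630581350267969 line in the factexcl.txt paste above, count only once - which is how 43 verified lines can come back as only 42 new results:

import re

def count_distinct_factors(path="fact.txt"):
    # Each entry looks like "p|k*2^n+1"; duplicates count once.
    pattern = re.compile(r"(\d+)\s*\|\s*(\d+)\*2\^(\d+)\+1")
    with open(path) as f:
        distinct = {m.groups() for m in pattern.finditer(f.read())}
    return len(distinct)

print(count_distinct_factors())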

Grafux
01-20-2005, 01:21 PM
So my 'SoB.del' is still ... empty. :(

Can anyone estimate the time required to complete my range? NbeGon_010 has been running on this G4 Dual 867Mhz for 250 hours now, and I would like to reboot this machine in the near future.

Will quitting the process mean that I will have to start over?

vjs
01-20-2005, 03:24 PM
What is your client speed?

I would expect that you find 1 factor every 20-25G.

Also, you should be able to restart, no problem; it will pick up where it left off.

There will be a file called SoB.bat

The contents should be something like

-s=SoB.dat -f=SoB.del -d=1.36 -p=630692865456137

The problem with sieving is you could go days without finding a factor and then get 3 or 4 in a row.
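
For a rough back-of-the-envelope feel (a sketch only; the 20 kp/s client speed is an assumption for illustration, and the factor density is the 20-25G figure above):

rate = 20e3            # client speed in p/s (~20 kp/s, assumed)
range_g = 25.0         # reserved range size in G (1G = 1e9 p)
per_factor_g = 22.5    # roughly one factor every 20-25G

hours = range_g * 1e9 / rate / 3600
print("range takes ~%d hours" % hours)                    # ~347 hours
print("expect ~%.1f factors" % (range_g / per_factor_g))  # ~1.1 factors

So at these speeds, days without a factor are entirely normal.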

Grafux
01-20-2005, 04:17 PM
The client speed is a dual-processor G4 @ 867 MHz. That's roughly equivalent to a PIII @ 1200 MHz, although I have no idea how well optimized NbeGon is for the G4 architecture.

So my range must be somewhat large, seeing as it has taken over 10 days and counting?

Mystwalker
01-20-2005, 04:44 PM
According to the program output you posted a while back, you're running at approx. 20 kp/s.

This means that you do 25G in ~350 hours. I guess each G4 CPU has a sieving client running, right? If so, you should have done ~40G by now, so a factor seems overdue.

Nevertheless, I'd suggest finding a project where G4s scream. Then you could look for someone running that project on an architecture where it's suboptimal (maybe a P3/Athlon) and offer to exchange work, so that you both (and both projects, of course!) profit!

vjs
01-20-2005, 05:56 PM
20 kp/s for a G4 does seem a little low; hopefully Chuck's new client will come out soon...

Mystwalker is correct: if the G4 is only getting 20 kp/s, another project would be a better solution. I know you're an Arsian like myself; check out the threads there... I believe TFY does work well with G4s. If that's the case, kb9skw would probably be happy to do a swap. He is into both TFY and TPR-SoB; you could probably swap those 20 for 200 in a heartbeat.

Grafux
01-20-2005, 06:11 PM
I will look into swapping, thanks for the advice.

Incidentally, I have one CPU sieving and one Folding, so I guess 20k it is.

cjohnsto
01-26-2005, 06:28 AM
I was just wondering if some of the lower reserved ranges with no activity will be reassigned. These ranges are relatively rich in factors, and some of the reservers have had no reported factors for months. Perhaps one of the coordinators should chase them up and see what is happening. As an example,
alexr's last factor was submitted 02-Oct-2004. Did they complete the range? Are they still working on it? If not, where did they abandon it?

vjs
01-26-2005, 10:42 AM
cjohnsto,

Looking at alexr's ranges, it looks like he actually did complete them; he just hasn't reported back yet. Several people have tried to contact him in the past, including myself. You might also consider that there are a lot of other possibilities, "holes", that should be looked at a little closer. These come from ranges that have in fact been reported as complete.

Joe and I have been testing some of these ranges with limited success; what seems to work best is keeping an eye on the factoring people. Frodo, for example, "a really aware factorer", will report when he finds a factor that should have been found through sieving. I will generally go back and test that range fully.

Please do not retest any ranges; you may in fact be triple- or even quadruple-checking them. At present our best bet is to continue working on ranges that we know for certain haven't been tested yet. To create a double-check sieve effort at this time would be a real nightmare for all involved.

Search around in the other threads regarding missed factors, double-check sieving, high and low range sieving, etc... You'll see this has come up in the past.

cjohnsto
01-26-2005, 03:09 PM
I did not intend to start these ranges again; I was merely wondering about their status.

vjs
01-26-2005, 04:12 PM
Status...

Basically, almost everything less than 26000 is either completed or being worked on.
Some of them have even been triple-checked.

As for the higher ranges? Time will tell, but currently retesting those is not the thing to do without careful consideration or proof of missed factors. There are a few above 200000, but they have lower success rates "guesses about missed factors or incomplete ranges declared complete" than those at lower p.

Nuri
01-26-2005, 07:07 PM
Looking at alexr's stats page, it seems reasonable to assume they're completed.

Even if not, it really is not worth resieving those four ranges of his.

It's only a matter of bookkeeping. Maybe we can move his ranges to the archive with a tiny note attached, like:

359000-360000 alexr [not reported as complete yet]
361050-361200 alexr [not reported as complete yet]
370000-372000 alexr [not reported as complete yet]
374000-374500 alexr [not reported as complete yet]

Guilherme
01-28-2005, 05:10 AM
I am using proth cmov for sieving. What result file should I send to SoB: factexcl.txt, factrange.txt, or both?

maddog1
01-28-2005, 07:44 AM
The only really necessary file is fact.txt.
If you don't have that, it means you haven't found any unique factors yet - keep on sieving, one will appear eventually! :)
Generally, I think it's a good idea (for completeness) to also include factors from factexcl.txt and factrange.txt, but these won't save any tests...

Joe O
01-28-2005, 08:37 AM
But your factrange.txt file can help us. Please send your factrange.txt files to factrange@yahoo.com. It would be nice, but not necessary, to put "factrange" somewhere in the subject line. Please zip it or compress it with your program of choice. We read .zip, .bz2, .bzip2, .7z, .rar, .tar, .gz, .gzip, etc., and of course, .txt. Thank you.

Guilherme
01-28-2005, 09:41 AM
OK, Joe O, I will send it to you.

engracio
01-28-2005, 10:11 AM
Joe O & vjs,


Weekly I have been sending the factrange.txt and factexcl.txt to this URL: http://www.seventeenorbust.com/sieve/ - the sieve results submission page. Do you guys have access to those .txt files?

For your purposes, do you still need those .txt files sent to factrange@yahoo.com?


e:)



vjs
01-28-2005, 11:02 AM
e,

Yes, we still need them sent to factrange, and we do apply all SoB factors to the high-n dat on a ~weekly basis; quite a few people are sending...

Any factors above n=20M are actually ignored by the server, so if you are only sending factrange to SoB, we can't get those >20M.

PM me with your e-mail and I'll put you on the e-mail list (graphs, stats, etc.).

Hmm, have you been e-mailing those files to factrange? You might be one of the users we haven't been able to identify???

If you only sent them to SoB and not factrange@yahoo.com, then deleted them, don't worry too much. A lot of the factors in factrange are actually excluded or duplicates. Last time I checked, only about 5-10% of them were unique; I'd have to check with Joe - he does a wonderful job with the db and can probably answer your question better than I can.
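
If you want to see which of your factrange.txt entries the server will even accept, a small sketch (Python; the helper name is hypothetical, the 20M cutoff is as described above) can split them - though per the above, the whole file should still go to factrange@yahoo.com:

def split_factrange(path="factrange.txt"):
    # Entries with n < 20M can be submitted on the SoB page; the server
    # silently ignores anything larger, so those must be e-mailed instead.
    accepted, email_only = [], []
    for line in open(path):
        line = line.strip()
        if not line:
            continue
        p, expr = [part.strip() for part in line.split("|")]  # "p|k*2^n+1"
        n = int(expr.split("^")[1].split("+")[0])
        (accepted if n < 20000000 else email_only).append(line)
    return accepted, email_only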

Joe O
01-28-2005, 11:53 AM
VJS' 5-10% may be a little low. The last two files I processed were 4% and 40%.
Input 97 factors Added 4 factors

Input 1963 factors Added 79 factors

engracio
01-28-2005, 12:33 PM
Joe O & vjs,


I was mainly sending the factrange.txt and factexcl.txt to SoB to keep track of the ranges I have completed. I'll keep on sending those .txt files to SoB and e-mail them to factrange@yahoo.com every couple of weeks. vjs, does that sound like a plan? As for the previous data, it is history.


e:)

Joe O
01-28-2005, 12:45 PM
engracio,
Sounds like a plan to me!

vjs
01-28-2005, 12:46 PM
Joe's really the person to answer questions regarding where the factors are coming from.

But I do have a hacked-together stat of what's been going on; I hope this one is the corrected version.

Lower Upper k/n's k/n's Factors Found Found Found
(n>, M) (n<, M) Original Remain Found by 10K by 2.5T by 3T+
0 1 28187 27992 195 0 39 156
1 3 53908 53787 121 0 23 98
3 8 131984 131700 284 0 0 284
8 10 53115 52991 124 0 0 124
10 20 265330 264755 575 0 240 335
20 30 648872 311109 337763 331271 3871 2621
30 40 648663 311598 337065 330829 3604 2632
40 50 649463 312441 337022 330923 3500 2599
50 60 649117 319329 329788 318159 5780 5849
60 70 648603 320929 327674 315355 6131 6188
70 80 648590 321341 327249 310861 10282 6106
80 90 648497 320569 327928 310689 11080 6159
90 100 648923 321483 327440 310061 11187 6192


0 1 28187 27992 195 0 39 156
dat % 100 99.31 0.69 0.00 0.14 0.55
1 20 450429 449446 983 0 240 743
dat % 100 99.78 0.22 0.00 0.05 0.16
20 50 1946998 935148 1011850 993023 10975 7852
dat % 100 48.03 51.97 51.00 0.56 0.40
0 50 2479522 1466373 1013149 993023 11277 8849
dat % 100 59.14 40.86 40.05 0.45 0.36
50 100 3243730 1603651 1640079 1565125 44460 30494
dat % 100 49.44 50.56 48.25 1.37 0.94
0 100 5723252 3070024 2653228 2558148 55737 39343
dat % 100 53.64 46.36 44.70 0.97 0.69



The dat %'s are a comparison to the number of k/n pairs in the original dat.
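
In other words (a trivial sketch in Python, using the 20<n<50M aggregate row above):

original, remain, found = 1946998, 935148, 1011850   # 20 < n < 50 (M) band
print(round(100.0 * remain / original, 2))   # 48.03
print(round(100.0 * found / original, 2))    # 51.97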

Please note that the 10K, 2.5T, and 3T+ references don't accurately reflect the true p-level.

"Found by 2.5T" (for example) does not mean we have sieved to n=100M with all p<2.5T; also included are the factrange submissions as they come in, values from the main effort, other higher ranges which have been completed, "Frodo's missed factor range", etc...

I'd like to add at this point that people should not consider sieving at higher T values; our best bet at the moment is to continue with the main effort until the limit of proth is reached, or until we're advised otherwise by the project owners.

I think it's also important to point out the n<20M k/n pairs and factors. The original number is not the "original number of total k/n pairs possible"; it's just the number we started with, thanks to the main effort. If you compare 10M<n<20M to 20M<n<30M you can see how much deeper the main effort has sieved.

Also, a lot of those k/n pairs eliminated below n=20M are actually ones included from the main effort. Those less than n=1M are our effort and factrange, of course. (Joe, correct me here if my comments are incorrect.)

Death
02-01-2005, 03:26 AM
699598206253927|4847*2^12764247+1
699599049182837|22699*2^15108238+1

Well, everything is all right.

Stromkarl
02-11-2005, 08:52 PM
I was just reading the thread in the main forum regarding error testing.
Do new users of the sieve need to start with a known small range to verify that their computers are working properly? Or is the math not as much of a problem with sieving as it is in the PRP effort?

Stromkarl

Joe O
02-11-2005, 10:24 PM
Originally posted by Stromkarl
I was just reading the thread in the main forum regarding error testing.
Do new users of the sieve need to start with a known small range to verify that their computers are working properly? Or is the math not as much of a problem with sieving as it is in the PRP effort?

Stromkarl

That's a good idea, but we haven't done it. Just reserve a decent-sized range and "start your engines". Welcome to sieving! And yes, there is not as much of a problem with sieving as in the PRP effort. Here you may just miss a factor; if you miss too many, gap analysis will find it, and someone will post about it. In PRP, if you miss a prime, it will be a long time before double-checking finds it.
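
The gap analysis Joe mentions can be pictured with a toy sketch (Python; purely illustrative, not the actual analysis used for the project): sort the factor p values reported for a range, and flag gaps that are much larger than typical, since a missed factor leaves an unusually long empty stretch.

def suspicious_gaps(p_values, factor=4.0):
    # Flag consecutive-factor gaps exceeding `factor` times the median gap.
    ps = sorted(p_values)
    gaps = [b - a for a, b in zip(ps, ps[1:])]
    if not gaps:
        return []
    median = sorted(gaps)[len(gaps) // 2]
    return [(a, b) for a, b in zip(ps, ps[1:]) if b - a > factor * median]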

vjs
02-12-2005, 04:13 PM
Double-checking is really something that Joe, myself, and a few nameless others are working on.

We are not truly double-checking but sieving 991<n<50M, whereas the main effort is currently sieving the second-pass range of roughly 1.5M<n<20M. In the past, several dats have been used: 300K<n<3M, 3M<n<20M, and 1M<n<20M. Our large dat spans all of these and is about 15% slower than the current dat. In return, our large range does in fact find factors missed by previous programs and dats, as well as factors below and above those n bounds.

There have been quite a few missed factors - nothing to get excited about, but enough to keep Joe and me going and interested. Currently the worst thing to do would be to recheck one of the old ranges using the current 1.5M<n<20M dat; your chances of finding a factor are better reserving a new range.

Welcome to sieving; there is a lot to learn in this project if you're interested, and a lot of questions still remain.
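
To see why the big dat finds factors "below and above", a small interval sketch (Python; the n bounds are the ones quoted above, and the code is a toy illustration) shows which n values the old dats never covered at all. On top of that, re-covering the old ranges catches factors the earlier programs missed.

old = [(300000, 3000000), (3000000, 20000000), (1000000, 20000000)]  # past dats
big = (991, 50000000)                                                # our dat

merged = []                        # merge the old dats' n coverage
for lo, hi in sorted(old):
    if merged and lo <= merged[-1][1]:
        merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
    else:
        merged.append((lo, hi))

uncovered, cursor = [], big[0]     # subtract coverage from the big dat's span
for lo, hi in merged:
    if lo > cursor:
        uncovered.append((cursor, lo))
    cursor = max(cursor, hi)
if cursor < big[1]:
    uncovered.append((cursor, big[1]))

print(uncovered)   # [(991, 300000), (20000000, 50000000)]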

Stromkarl
02-13-2005, 12:52 PM
Originally posted by vjs
<snip>
If people are interested in where you and others are, Mike has a fantastic page...

http://www.aooq73.dsl.pipex.com/gaps/Gaps_n3_20_ps0u.htm
The above link shows all the gaps..
<snip>

I have gone to this site and it doesn't show anything for 678000-678500 yet. How often is it updated? Does it include all 78 factors I have already submitted from factexcl.txt, the 1 from factrange.txt that was below n=1M, and the 1 I have submitted from fact.txt?

I have also found 5 factors above 20T. I will submit them to the factrange[at]yahoo[dot]com email address when the range is done, if Joe_O and vjs still want them.

Stromkarl

vjs
02-13-2005, 01:16 PM
Try these pages for a more detailed look at the gaps...

http://www.aooq73.dsl.pipex.com/gaps/Gaps_n3_20_p04u.htm


Originally posted by Stromkarl
I have also found 5 factors above 20T.

Are you sieving in that range? Or how did you find them? Joe, myself, and a few others are already working on all ranges less than 50T.

Please e-mail the factors, but if they are within 20-23T that range has already been sieved, and we have found quite a few factors there with 1M<n<20M.

If anyone is planning on resieving any range, please check with us first; we have a better system and may have done the range already... Thanks.

maddog1
02-13-2005, 02:39 PM
VJS, I think Stromkarl means he has found 5 factors with n>20 million in his factrange.txt :D

vjs
02-13-2005, 03:21 PM
Ahhh...

20M and 20T are totally different. I hate these k, K, M, m, N, n, p, P, T, G, Y prefixes; people, including myself, get them confused.

Yes, if you found factors for numbers like

678XXXXXXXXXXXXXXXX | 67607*2^21,156,785+1

then we still need them; send them to factrange@yahoo.com - thanks for helping out.

We also need factors like

678XXXXXXXXXXXXXXXX | 24737*2^991+1

so just send your entire factrange.txt file.

Stromkarl
02-13-2005, 07:53 PM
Sorry, brain lapse there. It is indeed 20M, not 20T. I also found one around n=186k. I submitted that one to SoB, but I will submit it to you as well.

Stromkarl

Grafux
03-04-2005, 01:37 PM
I have to stop sieving. I am not sure how well I did, but I will post my results here.

file name: SoB.bat <contents below>
./NbeGon_010_osx -s=SoB.dat -f=SoB.del -d=1.42 -p=630688573751303-631000000000000

file name: SoB.del <contents below>
4758824368237 | 22699*2^2421118+1

I am assuming that I did not finish my range. Please assign the remainder to another siever.

thxbai

vjs
03-29-2005, 10:25 AM
ceselb,

In the future, could you please mark (with *, +, >, or a note in the main thread) those ranges reserved with a 991<n<50M dat.

740000-742000 Nuri (991<n<50M dat) was omitted, and it's possible that eventually I'll miss one of these reservations.

Examples


747000-761600 engracio (on 991<n<50M sob.dat)
738500-739000 Silverfish (with 991-50M dat - ETA:End of April)

Also, do you have a working copy of Chuck's new sieve program for testing, or has this effort been dropped entirely? I haven't heard anything from him this year!!!

I'd like to test it against our dat and other dats for missed factors etc. I could use the most recent build of either the 32- or 64-bit client... if there is a difference.

If you don't want to give it out, would you test a small range for me?

P.S. Sorry for messing up the thread; I just want to make sure you see this post.