View Full Version : Sieve Coordination Thread [old]



jjjjL
01-18-2003, 10:59 PM
Hi guys. The public sieve is... well... public :):

http://www.seventeenorbust.com/sieve/

Most of the info you need to get started is there.

The community should start at p = 25 billion. I'm gonna finish the range below that myself.

I'd recommend 25 billion wide p ranges since the n range is so wide (n = 3 - 20 million).

i. e.

first person: 25 - 50 billion
next person: 50 - 75 billion
next person: 75 - 100 billion
etc...

Don't let what you experience during the k=4847 sieving deceive you... sieving 17 million n-wide takes a lot longer than 1 million.

That said, it won't be 17 times slower. Paul Jobling managed to make SoBSieve several times faster than NewPGen was. This implementation is even faster than the private 2.71 beta that a few of you have.

If 25-billion-wide ranges don't make sense, change the size, but don't take a low p range unless you're SURE you can do it. It is very important that we get to a decent sieve level before the server starts assigning n-values above n=3 million.

If you are using a slow computer or are new and unsure if you want to do sieving, please reserve a range above 1 trillion for now.

In the future, I hope to make something on the sieve page to reserve ranges without using this forum but for now please coordinate using this thread exclusively.

One last reminder: Make sure you're logged into the SB site if you want to have it remember that you submitted the results (for future stats purposes). The sieve submission is really simple. That site again is:

http://www.seventeenorbust.com/sieve/

There will be an announcement on the site about it tomorrow. Email me if you run into any problems.

Happy sieving folks. This should be interesting! :)

-Louie

ceselb
01-19-2003, 12:25 AM
Yes, finally. :)

I'm taking 25 - 50 billion.

kmd
01-19-2003, 12:52 AM
I'll do 50-75 billion.

I looked, and there was an option for alpha setting. It said it can be used to optimize performance by doing calculations differently. Is the optimum alpha random from machine to machine, or does it have to do with what specs you have?

Also, how long does it take to finish one range? It's not like I have a slow computer; I'm more wondering how big of a commitment I'm making here.

ceselb
01-19-2003, 02:59 AM
It's BIG. Something like 45 days on my P-IV 1.5Ghz. I'm adding another (slower) box tomorrow, but it'll still take over a month.
Will we reach 3M before that?

cjohnsto
01-19-2003, 03:44 AM
As candidate numbers are removed from the large datafile, how are you going to keep the clients up to date? Will you, say, give out a new datafile to download once a week, or a patch file, or will you not care that some time is wasted looking for factors of numbers where one has already been found?
My preference is patch files as well as a weekly updated datafile. Even better would be an automated system where the siever automatically sends results and removes candidates from the local file.

cjohnsto
01-19-2003, 03:50 AM
I'll take 75-100 billion

Nuri
01-19-2003, 04:23 AM
I'll take 100-125 billion.

By the way, what does "badass" mean? (From "Sieve Result Submission" page)

Nuri
01-19-2003, 04:55 AM
Originally posted by ceselb
It's BIG. Something like 45 days on my P-IV 1.5Ghz. I'm adding another (slower) box tomorrow, but it'll still take over a month.
Will we reach 3M before that?

Does the "Rate" (p/sec) decrease as we sieve further, or is it only me? I have started only half an hour ago, but it has already dropped down by 3%. So, if that is the case for everybody, it might take much longer than 45 days. :(

I guess we will start testing 3M+ towards the end of february (unless we find a "couple of" primes before that, which will decrease the number of k's)


Originally posted by kmd
I looked, and there was an option for alpha setting. It said it can be used to optimize performance by doing calculations differently. Is the optimum alpha random from machine to machine, or does it have to do with what specs you have?

And, does anyone know the answer to that question? Is it fine as it is? Or should we try to optimize it ourselves?

Thanks.

McBryce
01-19-2003, 05:24 AM
I'll take 125 - 150 billion.

smh
01-19-2003, 06:51 AM
Unfortunately a 25G range is way too large for me; it'll take almost a month on my P4 2.4GHz, and the only machine I have which can run 24/7 is a slower PIII 450.

I think it's a good idea to make the ranges a bit smaller above, let's say, 500G. I think it will be hard to find people who want to commit two GHz-months just to sieve one range. Maybe 10G ranges above 500G and 5G ranges above 1T?

If it's possible to get a reservation system on the website, would it even be possible to pick one or a couple of free 1G ranges?

Currently there are over 825,000 numbers left, but I guess quite a lot of numbers will be removed in the first few days. So the SoB.dat file should be updated every now and then. My guess is that 650,000 numbers will be left around 1.5T.

Just for testing, I sieved 250,000,000,000 to 250,020,000,000 and removed 4 numbers. Since I've no way of finishing a 25G range in a week (I'll be flying to Southeast Asia for almost a month next week :|party|: ) I'll stop here and submit the factors.

FatPhil
01-19-2003, 07:15 AM
The cost of testing a prime is almost certainly
time(generating the next prime)
+#bigsteps * time(bigstep)
+#babysteps * time(babystep) * #ks

The initial term, sieving for p, should be negligible, and shouldn't make too much difference. I'm sure Paul's sieve uses an n·log log n algorithm, so it will slow down slightly as the p range increases, but there will be fewer primes to test, so the total time to do a p range should remain flat. (In my own sieves the times tend to decrease.)

The big steps are modular multiplications and writing entries into some kind of hashing structure. These are quite slow.

The baby steps are either simple halvings or doublings, and lookups into the previously created hash table. These are very fast, assuming you can stay within the cache. However, you can't! Nonetheless they're still quite quick.

Both of the above depend on the exact machine architecture being used. However, one that's faster for the big steps will almost certainly also be faster for the baby steps too. The exact ratio may vary though.

To perform a discrete log (which is what fixed-k sieving is), the product of the number of big steps and baby steps must exceed the n range.

So all in all the cost is
C0 + C1*B + C2/B
where B is the number of big steps, and C0, C1, C2 are constants.
Now while this appears to shoot off to infinity, it does have quite a broad area where it will be quite flat.
For example, some times for my own particular discrete log with various alphas:

0.6 1m54.690s
0.8 1m51.680s
1.0 1m49.290s
1.2 1m49.300s
1.4 1m48.690s
1.6 1m49.040s
1.8 1m50.010s
2.0 1m50.680s
2.2 1m51.660s
2.4 1m51.730s
2.6 1m54.270s
2.8 1m55.940s
3.0 1m56.720s
3.2 1m58.240s
3.4 1m59.830s

Now while it might appear obvious that 1.4-1.5 would be best for the above run, times of _identical_ tests across multiple runs can vary by up to about 2% or more, much more than the half-percent separating the times above. I get the feeling that it's more sensitive to the state of the cache than it is to the alpha parameter. (The jump between 2.4's 1:51 and 2.6's 1:54 is an example of this difference, I'm sure.)

So, in summary - don't worry too much about the exact value of alpha; as long as you're within a range of the theoretically perfect value your rate won't vary hugely.

Phil
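
As a worked form of the cost argument above (with B taken as the number of big steps and C0, C1, C2 as named there), the minimisation goes like this:

\[
  T(B) = C_0 + C_1 B + \frac{C_2}{B},
  \qquad
  C_2 \approx (\text{n range}) \cdot (\#k\text{s}) \cdot t_{\text{baby step}},
\]
\[
  T'(B) = C_1 - \frac{C_2}{B^2} = 0
  \;\Rightarrow\;
  B^{*} = \sqrt{C_2 / C_1},
  \qquad
  T(B^{*}) = C_0 + 2\sqrt{C_1 C_2},
\]
\[
  T\big((1+\varepsilon)B^{*}\big) - T(B^{*})
  = \sqrt{C_1 C_2}\,\frac{\varepsilon^{2}}{1+\varepsilon}.
\]

So missing the optimum B (equivalently, the optimum alpha) by 20% costs only a couple of percent in running time, which is consistent with the near-flat timing list above.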

nuutti
01-19-2003, 08:01 AM
I will take 150 - 155 (small range)

Yours,

Nuutti

paul.jobling
01-19-2003, 12:57 PM
Hi guys,

About that "alpha" value.... Louis said it might cause trouble :)

It is basically a tuning parameter for the algorithm that is used. Different numbers of k and different ranges of n require different alpha values. For the Sierpinski work, where there are 12 k values and the n range is 3 to 20 million, I found that a value of 3 gave the best performance on my PC, though your mileage may vary.

To get the best rate, just look at the 'p' rate and try to maximise that by changing alpha. You can quite happily set an alpha value, start the sieve, leave it for a few minutes, stop it, then repeat the process, as the software keeps track of where it has got to and knows the range that is being sieved.


The original algorithm didn't have that parameter, as those of you who have used NewPGen will know. However, while I was attempting to get the very best performance out of this software I found that this was something that was required.

Regards, and happy sieving,

Paul.

MikeH
01-19-2003, 01:12 PM
I'll take 155-175 billion.

Thanks,
Mike.

P.S. Thanks for all the hard work Louie et al, it really is appreciated.
:cheers:

paul.jobling
01-19-2003, 01:17 PM
Hi guys,

I'll take 155 billion to 175 billion. That should take this XP 2100+ about 20 days, with alpha set to 3.2.

NOTE Using the default value of alpha (0.5) is not recommended - my rate was ~7000 p/sec at that; with alpha set to 3.2 the rate is ~11200 p/sec.

Regards,

Paul.

paul.jobling
01-19-2003, 01:18 PM
MikeH beat me to it! OK, I'll take 175 to 200 billion (quickly presses Submit before that goes too....)

dudlio
01-19-2003, 01:22 PM
Race conditions...yeouch. I'll take 200-210 billion then.

MAD-ness
01-19-2003, 01:29 PM
Well, it looks like Paul just offered up the requested performance/completion time estimate that people wanted. :)

RangerX
01-19-2003, 01:30 PM
I'll take ***-*** billion. Why not; I leave my comp on all the time. A 1 GHz will probably take a decent time to finish just that small range though (especially with both SB and SoBSieve running :D).

EDIT: I changed my range so I blanked out the one above to drop confusion.

McBryce
01-19-2003, 01:34 PM
Hi,

I have to change my reserved range from 125-150 billion to 125-130 billion.

So 130-150 billion is free to reserve!

Martin

RangerX
01-19-2003, 01:37 PM
I'll take 130-150 billion instead :D

dudlio
01-19-2003, 01:38 PM
In terms of timing, figure 1 billion = 1 day. Roughly. If you want a more accurate estimate, download the client, do some timings and then reserve. My 10G is going to take two weeks... :/
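
As a rough cross-check on that rule of thumb, using the ~11,200 p/sec rate Paul quoted above for an XP 2100+ at alpha=3.2:

\[
  \frac{10^{9}\ \text{p}}{11\,200\ \text{p/sec}}
  \approx 8.9\times 10^{4}\ \text{sec}
  \approx 1.03\ \text{days},
\]

so "1 billion per day" is about right for a fast Athlon; slower machines, or a machine that is also running the main SB client, will take proportionally longer.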

smh
01-19-2003, 01:49 PM
And since we're into smaller ranges, i'll take 210-215G

so far:


0 - 25G Louie
25 - 50 ceselb
50 - 75 kmd
75 - 100 cjohnsto
100 - 125 Nuri
125 - 130 McBryce
130 - 150 RangerX
150 - 155 nuutti
155 - 175 MikeH
175 - 200 paul.jobling
200 - 210 dudlio
210 - 215 smh

RangerX
01-19-2003, 03:16 PM
How many of the # | #*2^#+1 things are we expected to get? I've got 6 right now and I haven't even broke 130.1 billion yet...

EDIT: Also, will the list be saved in between program runnings? Or should I submit when I close the program?

ceselb
01-19-2003, 03:25 PM
Originally posted by RangerX
How many of the # | #*2^#+1 things are we expected to get? I've got 6 right now and I haven't even broke 130.1 billion yet...

Fewer and fewer the further up we go. I'm getting lots, but my range is lower than yours.


EDIT: Also, will the list be saved in between program runnings? Or should I submit when I close the program?

Progress is saved in the SoBStatus.dat file. When you submit your factors, be sure to remove the "pmin=x" and "pmax=x" lines, as the submission script can't handle them.
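
A minimal sketch of that clean-up step (this is not part of either siever; it only assumes the SoBStatus.dat layout described above, with pmin=/pmax= bookkeeping lines mixed in among the factor lines):

/* strip_status.c - print SoBStatus.dat minus the pmin=/pmax= lines,
 * so the remaining factor lines can be pasted into the submission box. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *in = fopen("SoBStatus.dat", "r");
    char line[256];

    if (in == NULL) {
        fprintf(stderr, "can't open SoBStatus.dat\n");
        return 1;
    }
    while (fgets(line, sizeof line, in) != NULL) {
        /* drop the range bookkeeping lines, keep everything else */
        if (strncmp(line, "pmin=", 5) == 0 || strncmp(line, "pmax=", 5) == 0)
            continue;
        fputs(line, stdout);
    }
    fclose(in);
    return 0;
}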

dmbrubac
01-19-2003, 04:08 PM
I'll take 215-225

RangerX
01-19-2003, 06:03 PM
Seriously tinker with the alpha values. I just about doubled my original speed.

Nuri
01-19-2003, 06:11 PM
Can anyone help with this please?

After running SoBSieve for 12 hours, I had to stop it and restart my PC. When I restarted, I saw that all of the factors in the SoBStatus.dat file were lost (although p started from where it left off).

The problem is not so serious in this case; just 12 hours of work were lost. I simply installed the client to another folder and restarted testing. But of course, I don't want to experience that again.

In any case, I decided to take occasional backups of the SoBStatus.dat file and submit factors more frequently. That way, I won't have to start from scratch every time.


The questions are:

What do you think? Is it me that did something wrong, or may it be because of the client? Any ideas?


Louie: Is it possible to rearrange the Factor Verification Results page so that it shows previously submitted factors (of the user) as well? (If it would be very difficult to show all of the factors submitted, something like the last 10 factors would still help a lot.)

eatmadustch
01-19-2003, 06:36 PM
Sorry for this rather embarrassing question... if I were to take, say, 225-230, that would mean I would have to enter 225000000000 (9 zeros) as pmin and 230000000000 as pmax?

How long would this take on a PIV 2.53 GHz with 512MB RDRAM?

NOTE: I'm NOT reserving this range, this is right now just a question of curiosity. I'll reserve a range later, when I know how long this would take, so I know how big a range to reserve!

Also, does this test just for 4847 or all of the remaining 12? I thought Louie already sieved up to 10T for all but 4847?!

ceselb
01-19-2003, 07:07 PM
Originally posted by eatmadustch
that would mean I would have to enter 225000000000 (9 zeros) as pmin and 230000000000 as pmax?

Yes.

How long would this take on a PIV 2.53 GHz with 512MB RDRAM?


A fair guess is 4-6 days, give or take... (RAM size and speed aren't a factor, AFAIK)


Also, does this test just for 4847 or all of the remaining 12?

All 12.

I thought Louie already sieved up to 10T for all but 4847?!
Yes, n= 2 - 3M was sieved quite far. This however is a sieve for n= 3 - 20M. SoB will reach 3M in a little over a month or so.

EggmanEEA
01-19-2003, 11:25 PM
Since I have a bunch of other CPUs chewing on SoB, I'll run my own CPU (P4 2GHz, 1GB DDR RAM) on the following range:

225-250 Billion

If I can get a few more CPUs set up soon, I'll put a few more on sieving as well. With the flood of new users lately, we're going to need to feed them a lot of new numbers.

Regards,
EggmanEEA,
TeamRetro

jjjjL
01-20-2003, 02:18 AM
I think the estimate of 1 month before n=3 million values start getting assigned is a good one. I would say that our goal should be to sieve to at least p=1T before the first tests start going out. When we get close to the day when the first tests with n > 3 million are about to be assigned, I may temporarily decrease the expiration time back to 5 days so we can get as much sieving as possible done before the values start going out.

I will release a new sieve file once I finish my range 0 - 25 billion. It will probably be a few weeks still. At that point, I'll create a script to generate the sieve file directly from our database so that people's submissions can all be used to make the search space smaller as the sieve progresses. It will probably be something that gets automatically rebuilt on a regular basis (daily I'm thinking) and uploaded somewhere for people to grab.

As things progress, the sieve file will not shrink much and updating the file constantly will become less and less important. The big shrinks will come when k values get eliminated. :D And the decrease will likely be larger than 10% since, statistically, the k values with higher weight (read: more values to sieve) are more likely to have lower primes. For instance, if we find a prime for k = 55459, then ~15% of the numbers being tested would be eliminated.

BTW, I just reconfigured the verification script to run at a niced priority. It may run a little slower now. Shouldn't affect anyone besides me since I'm submitting huge blocks right now. For instance, a block of 4000 valid factors takes about 2 minutes to verify: 1 minute to do the division and another to do the mysql queries. I could optimize it, but I don't think it's necessary. The average user should not be submitting 4000 factors at a time. ;) In fact, I will probably raise the lower submission limit to 25 billion once I'm done with my range.

If for some reason you do have that many factors to submit (or more!) you should be aware that your browser might not like you using that much of the text box. For instance, in Opera, it will let me put more than 3000 lines in, but if I do much over 5000 at a time, it will not send anything when I submit (even though it lets me put it all in the submission window). I'm pretty sure it's a browser issue. Is this going to affect anyone besides me? If so, let me know and I'll look into it deeper and make sure I can't fix it somehow.

One last thought I just had: if it's manageable, I think I'll post a dynamic list of the factors above a certain bound somewhere. That way everyone can go through and check for "holes" in the sieve and patch them up. Let me know what you guys think.

That's all I can think of for now. Submissions seem to be rolling in so it looks like everything is working well.

Happy sieving folks! :)

-Louie
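
To make the "division" step above concrete, here is a minimal sketch of the check a verification script has to make for each submitted line of the form p | k*2^n+1. This is not Louie's actual script; the 128-bit intermediate (unsigned __int128) is a GCC/Clang extension, and the example values in main() are made up.

#include <stdint.h>
#include <stdio.h>

/* (a * b) mod m without overflow; needs a 128-bit intermediate since p
 * can be far above 2^32 */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
{
    return (uint64_t)(((unsigned __int128)a * b) % m);
}

/* 2^n mod m by square-and-multiply (n is at most ~20 million here) */
static uint64_t pow2mod(uint64_t n, uint64_t m)
{
    uint64_t result = 1 % m, base = 2 % m;
    while (n > 0) {
        if (n & 1)
            result = mulmod(result, base, m);
        base = mulmod(base, base, m);
        n >>= 1;
    }
    return result;
}

/* returns 1 if p divides k*2^n + 1 */
static int divides(uint64_t p, uint64_t k, uint64_t n)
{
    return (mulmod(k % p, pow2mod(n, p), p) + 1) % p == 0;
}

int main(void)
{
    /* example only - these values are invented, not a real factor line */
    uint64_t p = 250000000001ULL, k = 55459, n = 3000037;
    printf("%llu %s %llu*2^%llu+1\n",
           (unsigned long long)p,
           divides(p, k, n) ? "divides" : "does NOT divide",
           (unsigned long long)k, (unsigned long long)n);
    return 0;
}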

paul.jobling
01-20-2003, 04:29 AM
After running SoBSieve for 12 hours, I had to stop it and restart my PC. When I restarted, I saw that all of the factors in the SoBStatus.dat file were lost (although p started from where it left off).

...

What do you think? Is it me that did something wrong, or may it be because of the client? Any ideas?


OK, I know why this happened. It is because you had checked "Create new SoB.dat file". So PLEASE PLEASE everybody - make sure that this is unchecked before you stop the program.

paul.jobling
01-20-2003, 06:41 AM
I attach an upgrade to the software. This does two things:

(1) The "Create new SoB.dat file" option has been removed.

(2) Alpha is set to 3 by default.

To upgrade:

- First go to Options, and MAKE SURE THAT "Create new SoB.dat file" is UNCHECKED. If this is checked, it will delete the SoBStatus.dat file when you stop the program *

- Exit the software

- Copy this new software and run it.

Regards,

Paul.

* This was designed behaviour - instead of producing the SoBStatus.dat file listing the removed values, it writes out a new SoB.dat file. However, while that is useful for me (for testing) or Louie (to produce the file in the first place), it is not useful to you.

ceselb
01-20-2003, 07:02 AM
I downloaded a copy, but there was already 1 download by then.
Didn't install it though, so no problem. :D

Nuri
01-20-2003, 02:35 PM
Thanks for the info Paul. :thumbs:

I remember having checked it thinking that it should be done to maximize output (!) by removing k,n pairs that have a factor from the SoB.dat file. :D

Of course, I didn't know it restarts the SoBStatus.dat from scratch. :scared:

Anyway, maybe it's a good thing that it happened that early.

frmky
01-20-2003, 02:40 PM
I'll take 250-255 billion.

Greg

jjjjL
01-20-2003, 03:42 PM
I just updated the download with the new version of the siever.

There is also a slightly newer sieve file too. Download it.

http://www.seventeenorbust.com/sieve/

-Louie

RangerX
01-20-2003, 05:31 PM
How often should we submit? Is there any particular moment, like before we reboot?

Also, if we accidentally submit data that we've already submitted, does the upload script check to see if it's already in the database before adding it?

I'm asking because I haven't submitted anything yet, but given how long it's taken me just to get to 130.26 billion, I'll probably be needing a reset before it's done and I want to make sure I don't lose anything.

EDIT: Also, I really don't know that much about what's going on and even less about the number theory behind this. I just chip in my processor because it's not doing anything anyway and I think math is a good cause. But my question is, is this sieving being done for its own sake, or is it going to help out with finding primes for the SB program? Also, what are all the k, n, and p values that keep getting mentioned?

Finally, I've been running this thing for about a day now and I'm at 130.26 billion, so at this projected rate I probably won't get up to 150 billion by the end of the month. So should I release 140-150 billion, or should I maybe stop running SB for a little bit (after the next data set is finished) and let the sieve take up my full idle processor?

Alien88
01-20-2003, 06:50 PM
I'm taking 255-260 billion.

Nuri
01-20-2003, 07:04 PM
RangerX,

(Louie and Paul: Sorry if I am wrong in any of the comments below.)

You can submit as frequently as you like. There is no problem in submitting the numbers more than once; the server understands it. Simply copy all of the factors in the window of the client (or from the SoBStatus.dat file, but be careful to exclude the pmin and pmax rows in that case), paste them into the submission window, and press the submit button.

Important: Do not forget to log in before you submit. Otherwise the database will not know it's you submitting (it will record the numbers anyway, but "you" will not get credit for them in the future. This might be an important problem if you care a lot about stats).

If you feel unsure at the beginning, simply copy the contents of the folder to another folder before you reboot your computer. (I even paste my factors into an Excel file :D )

I accidentally (!) checked the "Create new SoB.dat file" option and closed the program in the previous version, which resulted in the loss of my factors (discussed above). I think there is no such risk if you did/will not check it, or, better still, are using version 1.07.

On releasing the 140-150 range, I think you should wait a day or two before doing that. I am not really sure you are significantly slower. We'll see how fast everybody is doing by that time as everybody shares their progress.

By the way, the highest p range that any of us reserved is still just 260 billion. So, it is too early to feel uncomfortable.

Of course, it is still important that you commit your computer time 24/7 (or close to that) to meet the deadline.

Q: What is your processor?

Regards,

Nuri

RangerX
01-20-2003, 07:15 PM
My processor is a 1 GHz Athlon Thunderbird (the ones with the 512k cache, I think). I'll go ahead and temporarily kill the SB program after it's done with the current data set to speed up the sieve (should be about a 2x speedup, since currently they're both taking up about the same processor percentage).

EDIT: I haven't checked the "create new .dat file", but before I upgrade to 1.07 I'll make sure to copy all the factors into a text file and write down the sieve's p progress so that I don't waste any time.

dmbrubac
01-20-2003, 08:41 PM
I've been running for about 28 hours now and my calculations are that my 10 billion (I have 215-225) will take 33 days. If this is too long, let me know. Maybe somebody wants 5 billion.

It's running on a 1.1 G P3, but only getting 50% CPU. I could shut down the other process, but I would rather not.

Let me know

Dave

Nuri
01-20-2003, 09:06 PM
Let me give my status report as well.

I took 100-125.

Currently I am running 100-120 at home and am at 100.65. Found 218 factors so far.

I am also running the 120-125 range at work and am at 120.x. Found a couple of factors there too.

kmd
01-20-2003, 09:12 PM
My check-in: working on 50-75 billion, currently at 51.3 billion. 845 factors found thus far. A very rough estimate has me finishing in about 20 days.

jjjjL
01-20-2003, 09:35 PM
Looking good.

I'm at 6.6 billion now.

Here is data of what has been submitted so far

u-id # of factors submitted
------ ---------
NULL 12
1 73958
105 4
212 182
365 1034
366 4
1418 114
1577 241
1608 27
1761 800
1831 4

If you're curious what your userid is, get it from your personal stats page URL.

i.e. when you lookup my stats, the url is

http://www.seventeenorbust.com/stats/users/user.mhtml?userID=1

because my userid is 1.

As you can see, I've submitted a few factors ;). The low range is pretty rich in factors.

Also, it looks like someone submitted 12 factors w/o being logged in. Oh well. It's not a huge deal.

Also, don't think that I'm expecting you to submit factors all the time. I'm just showing you what's been done so far.

Keep up the good work everyone.

-Louie

Nuri
01-20-2003, 10:47 PM
Originally posted by jjjjL

Also, it looks like someone submitted 12 factors w/o being logged in. Oh well. It's not a huge deal.


I suspect it might be me. :rolleyes:

In fact, since we're not to many people here, and it's known who is working at what range, it should be very easy to understand simply by looking at the p values of the factors.

RangerX
01-21-2003, 12:39 AM
I've found 64 so far but haven't submitted yet. And I just rolled over to 130.3, and it's been at least 24 hours since I started. Probably more, but, sticking with liberal estimates, I'm looking to finish in about... ewww... this is nasty... 66.667 days... Yea that should cut down a lot when I stop sb.exe but I may still have to release 5 billion just to make the month deadline.

EDIT: It's too bad I can't get my video card working on one of these problems... I think its processor can match my main one as far as math speed goes... Yeah, I know it's sad.

Paperboy
01-21-2003, 01:13 AM
I wasn't sure where everyone was at, so I took 300 to 301 billion. I just wanted to try out the program, and based on other people's times I don't want to take a bigger chunk.

olaright
01-21-2003, 04:19 AM
So far:

0 - 25G Louie
25 - 50 ceselb
50 - 75 kmd
75 - 100 cjohnsto
100 - 125 Nuri
125 - 130 McBryce
130 - 150 RangerX
150 - 155 nuutti
155 - 175 MikeH
175 - 200 paul.jobling
200 - 210 dudlio
210 - 215 smh
215 - 225 dmbrubac
225 - 250 EggmanEEA
250 - 255 frmky
255 - 260 Alien88

300-301 Paperboy

FatPhil
01-21-2003, 10:27 AM
Originally posted by paul.jobling
Hi guys,

I'll take 155 billion to 175 billion. That should take this XP 2100+ about 20 days, with alpha set to 3.2.

NOTE Using the default value of alpha (0.5) is not recommended - my rate was ~7000 p/sec at that; with alpha set to 3.2 the rate is ~11200 p/sec.

Regards,

Paul.


Can anyone else supply p rates for various machines.
I've just tried the following on a Duron 900 using WiNE on Linux, with alpha=3:
p=4.500-4.501G 2500p/s
p=50.000-50.001G 2300p/s

This looks too slow by a factor of 2 or so.
Is anyone else using WiNE on Linux, or using a Duron of similar speed?

Phil

ceselb
01-21-2003, 11:45 AM
My rates (at alpha=3.2):
PIV 1.5Ghz P=26.34G Rate: 6733 p/sec
PII 350Mhz P=40.16G Rate: 2362 p/sec

Ranges done 25-26.34G and 40-40.16G.
Projected time left to run: 32.5 days at full speed.
1885 factors submitted (I'm id 365 if anyone wonders).

frmky
01-21-2003, 12:03 PM
Originally posted by FatPhil
Can anyone else supply p rates for various machines.
I've just tried the following on a Duron 900 using WiNE on Linux, with alpha=3:
p=4.500-4.501G 2500p/s
p=50.000-50.001G 2300p/s

This looks too slow by a factor of 2 or so.
Is anyone else using WiNE on Linux, or using a Duron of similar speed?

Yes, that definitely sounds too slow. My Athlon XP 1800+ (1.533 GHz) clocks in at 11855 p/s at p=251G and alpha=3.

Greg

FatPhil
01-21-2003, 12:32 PM
Originally posted by frmky
Yes, that definitely sounds too slow. My Athlon XP 1800+ (1.533 GHz) clocks in at 11855 p/s at p=251G and alpha=3.

Greg

OK, perhaps Paul's program is I/O bound, the XPs have larger and/or faster caches than the Durons, and I'm getting completely constipated.
I just tried on a Duron 1300 running NT, and it wasn't much faster. :-(

I've written my own sieve, you see, and I'm trying to compare speeds. On my machines, my own sieve is ~4 times faster than Paul's. However, it appears that Paul's is running at about 1/3rd speed on my machines compared with how it should run (weird, I've not had problems with NewPGen in the past).

I'll be giving windows (DOS box) and Linux binaries to Louie later this afternoon, and it would be nice if a couple of people could do a speed comparison.

If there are people out there with workstations (non-x86) then I can probably build binaries for those as well, as it's all C, no assembly language.

Ooh - final test complete, everything works (4.2* faster) - I'll ship it to Louie right now!

Phil

Pascal
01-21-2003, 01:13 PM
Indeed it would really be nice to have a Linux client. I'm also interested in testing this and may compile it with a given makefile under SuSE Linux 8.1.

Just send me the file or post a download link. ;)

FatPhil
01-21-2003, 01:30 PM
Originally posted by Pascal
Indeed it would really be nice to have a Linux client. I'm also interested in testing this and may compile it with a given makefile under SuSE Linux 8.1.

Just send me the file or post a download link. ;)


http://fatphil.org/maths/sierpinski/index.html

Windows and Linux binaries. Tested under NT, WiNE emulation, and Debian Linux, in the obvious ways.

Pascal - do you mind being the first guinea pig?
(It's basically only been tested by me on 'toy' ranges. The last one was to mimic a slice from Paul's 155-175G, and for that my Duron 900 was clocking 9900p/s under linux, obviously. Note - I've not worked out what the dimension parameter should be yet, I just use the default (~1.3) and it works fine.)

Phil

ltd
01-21-2003, 02:22 PM
Hi,

i will take 260 to 275.

Lars

ceselb
01-21-2003, 03:14 PM
Wow, NbeGon seems to be quite nice. :thumbs:
According to my calculations I jumped from ~6800 to 9400 p/sec (PIV 1.5). :D

smh
01-21-2003, 05:11 PM
Some timings on my P4 @ 2.4GHz for SoBSieve 1.07

Alpha - rate
2 - 15.6K
2.5 - 15.6K
2.7 - 15.7K
2.8 - 15.75K
2.9 - 15.75K
3.0 - 15.7K
3.1 - 13.7K
3.2 - 12.8K

NbeGon64 and SoBSieve have about equal speed when using the default alpha. When I use -d=2 or higher, SoBSieve is about 20% faster!

Cowering
01-21-2003, 05:34 PM
Originally posted by FatPhil
http://fatphil.org/maths/sierpinski/index.html

Windows and Linux binaries. Tested under NT, WiNE emulation, and Debian Linux, in the obvious ways.


Phil


Phil, could I get a compile done for UltraSparcII?

I have 2 4cpu smp boxes that can't run anything SoB otherwise. I have no clue if your code sets affinity in SMP mode, I guess I can figure that out if need be.

Thanks

FatPhil
01-21-2003, 05:37 PM
Originally posted by smh
Some timings on my P4@2,4GHz for SoBSieve 1.07

NbeGone64 and SoBsieve have about an equal speed when using default Alpha. When i use -d=2 or higher, SoB sieve is about 20% faster!


I tested at d=1.0 (the lowest it can go), 1.3, and 1.6.
1.0 and 1.6 were about the same speed, and 1.3 was a bit faster, so I'm guessing that on my architecture (Duron) 1.3 (the default) is pretty much optimal.

I have no idea where the optimal point is for PIIs, PIIIs or P4s; however, as my hash and cache behaviour are unrelated to Paul's, there will probably be no correlation at all. It's obvious mine favours different architectures from Paul's. This is good, as it means that we both fill in each other's weak spots, and no one's left with a slow sieve. I hope.

Phil

FatPhil
01-21-2003, 05:46 PM
Originally posted by Cowering
Phil, could I get a compile done for UltraSparcII?

I have 2 4cpu smp boxes that can't run anything SoB otherwise. I have no clue if your code sets affinity in SMP mode, I guess I can figure that out if need be.

Thanks

Can you (even briefly) get me a login account on the usparcs? I can SSH authenticate.
I've never tried to build my maths libraries on a usparc, so this could be an interesting experiment. The thing's more than a little chaotic, and requires deep magic to build (it's a drag, but it's not impossible - hard-coded paths and crap like that).
I'll try to sneak into the local university tomorrow, and see if I can get access to a machine myself.

Regarding affinity - I wouldn't know how to set it!

Phil

Cowering
01-21-2003, 05:55 PM
Originally posted by FatPhil
Can you (even briefly) get me a login account on the usparcs? I can SSH authenticate.
I've never tried to build my maths libraries on a usparc, so this could be an interesting experiment. The thing's more than a little chaotic, and requires deep magic to build (it's a drag, but it's not impossible - hard-coded paths and crap like that).
I'll try to sneak into the local university tomorrow, and see if I can get access to a machine myself.

Regarding affinity - I wouldn't know how to set it!

Phil

I will try, but these machines are behind the firewall from hell, and do nothing internet related all day, they are internal servers only. Might not even have telnet/sshd installed.. but i'll do my best

MikeH
01-21-2003, 06:00 PM
Posted by ceselb
According to my calculations I jumped from ~6800 to 9400 p/sec (PIV 1.5).
I too have just done a few quick tests with NbeGon64 v0.06sob

P4-1.7
SoBSieve ~7Kp/s
NbeGon64 ~11Kp/s

AMD XP 2100+
SoBSieve ~12Kp/s
NbeGon64 ~27Kp/s

These were using Paul's suggested alpha=3.2, and Phil's alpha left at the default.

Paul, it looks like you have some competition!

Phil, NbeGon64 (as well as being faster) solves my problem (1) (other thread). Any chance you could give an option to pick up where you left off? Currently, unless I do some pre-processing to get the last p value, every time I start NbeGon64 I'll do the same range over and over again.

Thanks,
Mike.

FatPhil
01-21-2003, 06:57 PM
Originally posted by MikeH
Phil, NbeGon64 (as well as being faster) solves my problem (1) (other thread). Any chance you could give an option to pick up where you left off? Currently, unless I do some pre-processing to get the last p value, every time I start NbeGon64 I'll do the same range over and over again.

Thanks,
Mike. [/B]


The quickest hack would be to drop a shell file with the command line you'd use to resume it; every half hour perhaps? In the windows version it would be a batch file, of course.
A ^C handler would be nice - I can do that for us Loonies, but don't know if windoze supports signals the same way.

It's late. I'll attack it with a fresh mind tomorrow.

Phil

RangerX
01-21-2003, 07:28 PM
Has your sieve been tested for accuracy, Phil? I just want to know because I could run it for a little bit from the beginning of my range and see if it pops up the same things SoBSieve did. I.e., I'm perfectly willing to run a test for ya :) I would offer to help with/make a shell program for continuing number ranges and such, but I've got my plate full with school work right now, so sorry :( But perhaps it would be easier to simply modify SoBSieve's algorithm to yours and leave its shell intact (as much as possible, anyway)?

RangerX
01-21-2003, 08:00 PM
PS. I tested the program on my comp; it does run much, much faster than SoBSieve, so I would be really happy to see its speed somehow meshed with the current SoBSieve. It finished 1/5 of what my computer has done so far in at least 1/24th the time (adjusting for the fact that it took all my system resources instead of the half SoBSieve is getting). Mostly the only thing I have a problem with is that I use my computer a lot for various things, and 1) XP lags running DOS (or mine does anyway; I'm convinced it's possessed by a demon though) and 2) the new program doesn't have any way to set it to use only idle resources. Not criticizing or anything; God knows it's better than anything I could throw together!! Just suggesting a solution. So yeah, I'll just shut up now :D

PSS. I checked the output with the one SoBSieve spat out and it's exactly the same (as I expected, but I figured a little extra checking never hurt).

Alien88
01-21-2003, 10:20 PM
To set the process priority, go to Task Manager and right-click on the process, then set the priority to what you want. By default it'll run at normal priority.

frmky
01-21-2003, 10:37 PM
I compared the speed of the 2 sieves on three Athlons and the timings surprised me:

Athlon XP 1800+ (1.533 GHz), Windows XP
SOBSieve 11890 p/s (alpha=3)
NbeGon64 24485 p/s (d=1) 2x faster

Athlon TBird 1.266 GHz, Windows 2000
SOBSieve 8600 p/s (alpha=3)
NbeGon64 20780 p/s (d=1) 2.4x faster

Athlon (K7-5 Argon) 750MHz, Windows 98
SOBSieve 3075 p/s (alpha=3)
NbeGon64 9805 p/s (d=1) 3.2x faster

Looks like I'll be switching software. Anyone wanna wrap a GUI around NbeGon64? :-)

Greg

FatPhil
01-22-2003, 04:48 AM
Originally posted by RangerX
2) The new program doesn't have any way to set it to use only idle resources.
...
PSS. I checked the output with the one SoBSieve spat out and it's exactly the same (as I expected, but I figured a little extra checking never hurt).


Doesn't XP have Task Manager to set priorities?
It's a single process single thread, just set it to idle.

I don't generally like the idea of processes setting their own priorities, that's the user's job. I know that different times I run the same program I'll want it to run at a different priority. i.e. priority is not a function of the executable, but of the task at hand. However, I'm used to the fine control that nice/renice give one in Unix.

Thanks for redoing your range. I did have 2 known small SoBSieve outputs to test against, but more tests is better.

Phil

FatPhil
01-22-2003, 07:31 AM
I've had a productive morning building NbeGon for some new OSes and architectures. Currently available are the following:

Linux/x86
Windows/x86
FreeBSD/x86
OpenBSD/x86
SunOS/Sparc
Linux/Alpha

http://fatphil.org/maths/sierpinski/

I've only tested the new non-x86, non-Linux ones on the old 150-150.01G range that I tested the others with, and it comes up with identical everything, so I think they're as trustworthy as the Linux/Windows ones.

It's the plain 0.06sob version, I've not added any features.
Any system with gcc should take 2 minutes for a new port, so just drop me a note if you want me to look at a new platform.
Long live portable C!

Phil

smh
01-22-2003, 10:25 AM
Just tested NbeGon64 on a PIII 450, and it's IIRC about 80% faster compared to SoBsieve:cheers:

RangerX
01-22-2003, 11:47 AM
Hehe... Yeah, it does; someone else posted about that already. I forgot all about that!! Figures; for once Windows offers something that might be useful to the power-users and I forget all about it. *sigh*

And testing the range is no problem. I figured since I wanted to test its comparative speed, I may as well go high enough to check some of the factors too. I also forgot to mention that my test was with your default alpha (1.3, right?), which isn't even the best for my range and system.

After SB is done with this current test (6 blocks left) I'm gunna kill both the SoB programs and start running the NbeGon64. I should be all set to finish in a month then.

I have a quick question though... My computer has a bad tendency to freeze every once in a while, in which case I have absolutely no way to close stuff so I can safely save data. Does the NbeGon64 program output the factors into the file as it goes? If so then I don't have to worry about anything, because I could just use the last p value written to the file to start over again. And does it overwrite the file when you start it over, or append to it?

FatPhil
01-22-2003, 12:51 PM
Originally posted by RangerX
I have a quick question though... My computer has a bad tendency to freeze every once in a while, in which case I have absolutely no way to close stuff so I can safely save data. Does the NbeGon64 program output the factors into the file as it goes? If so then I don't have to worry about anything, because I could just use the last p value written to the file to start over again. And does it overwrite the file when you start it over, or append to it?

I fflush() my write to the factors file, so it's "out of my hands". The kernel is permitted to delay writes if it so desires. For these sparse ranges, I could open and close the factors file each time, but I don't think there's any need for that at the moment.

Regarding resuming, I've also today added a self-resuming batch file setup. Basically I write a batch file with the correct command line every 10 minutes, so to restart just run that. Quite how Windows copes with writes to batch files that are being executed, I don't know. Please report back if there are problems. I intend to get most of the versions rebuilt by tomorrow, and they'll have the version number 0.07sob. I'll send a message to the forum when they're ready.

I also added a counter of the number of primes removed, which I think makes the otherwise boring progress markers more interesting.

Phil
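
A sketch of that self-resume idea (not Phil's actual code): at each checkpoint, rewrite a one-line SoB.bat whose command line would restart the siever from the current position. The -p<min>-<max> -s<file> flag form follows the nbegon64 batch line quoted elsewhere in the thread; the function and the values in main() are only illustrative.

#include <stdio.h>

/* rewrite SoB.bat so that running it resumes sieving from current_p */
static void write_resume_file(unsigned long long current_p,
                              unsigned long long pmax,
                              const char *datfile)
{
    FILE *f = fopen("SoB.bat", "w");
    if (f == NULL)
        return;                 /* just try again at the next checkpoint */
    fprintf(f, "nbegon64 -p%llu-%llu -s%s\n", current_p, pmax, datfile);
    fclose(f);                  /* close so the file is complete on disk */
}

int main(void)
{
    /* example values only; a real siever would call this every few minutes */
    write_resume_file(170886000000ULL, 175000000000ULL, "sob.dat");
    return 0;
}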

MikeH
01-22-2003, 01:00 PM
Original post by FatPhil
The quickest hack would be to drop a shell file with the command line you'd use to resume it; every half hour perhaps? In the windows version it would be a batch file, of course.
My prime (no pun intended) interest right now is Windows.

What you suggest would be good. I was thinking along the lines of saving pmin and pmax values to a file every (say) 15 minutes. Then in the client give a command line option to specify pmin and pmax from a specified file.

With this type of set-up I could install NBeGon64 as a service under FireDaemon. It would also give me the advantage that I can monitor the progress remotely just by looking at the save file.

Thanks for your help Phil. :cheers:

MikeH
01-22-2003, 01:21 PM
Phil,

We crossed posts. You're already ahead of me.:smoking:

MikeH
01-22-2003, 02:37 PM
Phil,

Just to check the .bat save file will work as a service, I just tried the following line in a .bat file

nbegon64 -p170886000000-175000000000 -ssob.dat

Launching the .bat file with FireDaemon is a breeze. I look forward to tomorrow's version.

dmbrubac
01-22-2003, 03:05 PM
I just switched over to NBeGon and it is a LOT faster on my P3. My earlier estimate of 33 days for 10 billion should tighten up. After I've run it for a day I will let you know my rate.

I agree with other posters though: put the NbeGon math inside the SoBSieve interface.

Nice work though, to both Phil and Paul.

Mystwalker
01-22-2003, 03:26 PM
I just tried out NbeGon before getting myself a claim to dig for composites. ;)

My question:
Are those outputs completely normal?

# 0.7 p=275000000543 hash overflows: 0|0|0|0|0|0
max num factors=1 at p=275000005999
# 8.7 p=275000066059 hash overflows: 6|0|0|0|0|0
! Info: required 2 rehashes for prime 275000120971
# 18.1 p=275000131613 hash overflows: 12|1|0|0|0|0

Overflow always sounds like "error" to me...

ceselb
01-22-2003, 03:37 PM
Below quoted from a mail from Phil (Bold is me).

Is the proportions of "hash overflows: 9388|154|1|1|0|0" in any way
related to dimension (and performance)?

Yes. The more 'big steps' in the DLOG, the more likely my hash table is to overflow. If the big step stage is faster then I want to make it as large as possible. However, if I overflow the hash table I need to do that prime again with fewer big steps. Therefore an overflowed prime costs at least one and a half times a normal prime, time-wise. So the balance is making the first attempt small enough that it's as fast as possible, but large enough that it doesn't overflow too often. Overflowing 1 in 1000 primes is nothing. Multiple overflows choose progressively fatter rectangles with fewer big steps, and incur a further cost.

The balance is such a compromise that the dependency of speed on the dimension parameter is reduced. If you choose a parameter too small, then you get faster DLOGs, but you overflow more often, so you have to redo more often. Likewise the other way. So it's pretty flat performance-wise on the whole.

Does the "max num factors=2 at p=" lines mean anything important, or
should I just ignore them?

Ignore that. I should remove that printf; it was useful when debugging at very small p, but is irrelevant now.

Mystwalker
01-22-2003, 03:47 PM
Ah, ok, thx!
Then I'll take 275-280

So we should have now:

0 - 25G Louie
25 - 50 ceselb
50 - 75 kmd
75 - 100 cjohnsto
100 - 125 Nuri
125 - 130 McBryce
130 - 150 RangerX
150 - 155 nuutti
155 - 175 MikeH
175 - 200 paul.jobling
200 - 210 dudlio
210 - 215 smh
215 - 225 dmbrubac
225 - 250 EggmanEEA
250 - 255 frmky
255 - 260 Alien88
265 - 275 ltd
275 - 280 Mystwalker

300-301 Paperboy


Are there any measurements yet of what good alpha values are?

ceselb
01-22-2003, 03:59 PM
Originally posted by Mystwalker
Are there any measurements yet of what good alpha values are?

The defaults are OK, I think. This varies over the ranges, so a quick test might be in order. Shouldn't be too far off though, so do as you want.

Zonar
01-22-2003, 04:01 PM
I was playing around with the programs, and it seems that NbeGon64 is more than twice as fast on my XP 2000+. I've done some testing on the 301 - 302 range, so I'll take 301 - 310 for now.
That leaves a gap between 280 and 300.

With fatphil's client I'm currently using an alpha of 1.000. This setting was the fastest for me.

FatPhil
01-22-2003, 07:38 PM
Originally posted by MikeH
Phil,

Just to check the .bat save file will work as a service, I just tried the following line in a .bat file

nbegon64 -p170886000000-175000000000 -ssob.dat

Launching the .bat file with FireDaemon is a breeze. I look forward to tomorrow's version.

Out at a gig tonight ("The Crown" from Sweden - amazing!) so I won't be up till /late/ tomorrow...

I decided that a parameter file was a good idea, and I also decided that the best format for a parameter file was to look like a batch file command line! i.e. I won't need to change the code I've already written. I'm glad the batch file works as is though.

Bed...
Phil

RangerX
01-22-2003, 07:48 PM
Hey, I answered my own question about the files. I don't know if the info will be useful to anyone else, but in XP at least the output is saved to the file as the program writes it. So in other words, if you're running the program and, say, your computer crashes/freezes, then all you have to do to start over is open up the current SoB.del file and start from the last p listed there.

That's why I was asking, because I'd prefer to run the program on the full range instead of breaking it up to make sure I don't lose anything in the case of a freeze. But now I don't have to worry about it.

PS. Of course the addition of the batch file generator makes all of this completely useless ;)

paul.jobling
01-22-2003, 08:18 PM
Well done Phil on the NBeGone stuff, it really pushed me into looking at SoBSieve and the bottlenecks in there. I found one, and now I think that - while I don't have access to every platform - this new version of SoBSieve should do the business.

NOTE that alpha should be set to 1 now!

This also automatically starts sieving when you set it off. I will be looking to get more performance out of this as well, as NBeGone pushes it close for *very* large values of p... Phil, how about I take Windows and you have the rest of the World?

Regards,

Paul.

Paperboy
01-22-2003, 09:53 PM
Nice speed up in the sobsieve client. It seems to be 2x or more as fast as the 1.07 client on my p3

frmky
01-22-2003, 10:00 PM
On my Athlon XP 1800+, the new SOBSieve runs at 26450 p/s which is MUCH better than the previous 11890 p/s and a bit faster than NbeGon's 24485 p/s. Good work!

Greg

jjjjL
01-22-2003, 11:01 PM
Paul does it again! I knew he could. :)

My Athlon 1.33GHz went from 9500p/sec -> 19k p/sec.

I updated the download on the sieve page:

http://www.seventeenorbust.com/sieve/

The new download has a slightly higher sieve file as well. There is also a new link to Phil's site.

DON'T FORGET: set alpha back to 1 for the new SoBSieve.

-Louie

smh
01-23-2003, 05:09 AM
Thanks Paul, with Alpha=1 SoBsieve works about 4% faster on my PIII450 compared to NbeGon64.

I had some problems starting the client though. Because I was running NbeGon64 I had renamed the status file. After starting SoBSieve 1.10 I got a popup to enter the range to test, but after clicking OK, the popup wouldn't go away and the sieving wouldn't start.

I created a new status file with an old version of SoBSieve, and after that, starting the newest version gives a popup with the range I entered in the old client; I press OK and the client starts sieving.

So, to me it looks like the newest version needs a status file, but doesn't create one itself.

FatPhil
01-23-2003, 06:12 AM
Originally posted by paul.jobling
Well done Phil on the NBeGone stuff, it really pushed me into looking at SoBSieve and the bottlenecks in there. I found one, and now I think that - while I don't have access to every platform - this new version of SoBSieve should do the business.

NOTE that alpha should be set to 1 now!

This also automatically starts sieving when you set it off. I will be looking to get more performance out of this as well, as NBeGone pushes it close for *very* large values of p... Phil, how about I take Windows and you have the rest of the World?


Take windows. Where are you going to take it.

However, purely for completeness I compiled a windows version of my latest version, just in case anyone wants a copy.

Available from
http://fatphil.org/maths/sierpinski/bin/
are the following
007 - with the resume data (put into a file 'SoB.bat')
008 - like 007, but faster too! (linux/win/sun/alpha so far)

It would be silly for me to give a figure comparing speed ratios, given how weird my machine behaves on SoBSieve, but shall we just say that I wouldn't be running 007 if I were you!

I have only tested using my old favourite ranges (I can verify them by inspection now as I've seen them so many times), but it would be nice if someone who did one of the lower ranges (more factors) could either verify, or issue a bug report...

Note - the optimal dimensions have changed: -d=2.9 works well for me, but 2.7-3.1 were pretty flat. It'll depend on architecture

Phil

olaright
01-23-2003, 06:38 AM
:cheers:

Why don't you two exchange code to make one SUPER fast siever!


Btw do you guys profile your executables? I heard that can boost performance considerably too.

Cheers,

Ola

paul.jobling
01-23-2003, 06:41 AM
Phil,

There is an obvious need for a non-Windows sieving client and your software fills that role very well. But for Windows I have got many optimisations to come and it is crazy to be this competitive - I have got an easy 10% improvement with a couple of one line changes, but I want to spend some time getting it all sorted and solid and get further improvements rather than rushing out updates in an optimisation battle.
Cheers,

Paul.

paul.jobling
01-23-2003, 06:42 AM
Why don't you two exchange code to make one SUPER fast siever!

I offered to do exactly that, but Phil didn't want to play ball :bang: . Which just spurred me on...

Cheers,

Paul.

smh
01-23-2003, 07:00 AM
Great, back to Phil's N-be-Gone

With -d=1.3 (works faster than -d=3 for me) it's about 40% faster compared to SoBSieve 1.10.

It's over 3 times faster compared to the original SoBsieve

ceselb
01-23-2003, 07:13 AM
My main reason for switching to NbeGon was the long time to complete my range. With these new versions it's now down to manageable times (32 vs. 15 days). Now I've switched back to SoBSieve again.

Advantages of the two programs:
SoBSieve
GUI.
No need to tinker with command lines or shortcuts / scripts.
Rate is viewable.

NbeGon
Run as service / at startup.
No need to remove lines from output file.
Runs on non-windows boxes.

ltd
01-23-2003, 07:29 AM
Hi,

only to correct the last list of reserved ranges.
I have reserved 260 to 275.

Lars

paul.jobling
01-23-2003, 07:49 AM
I've got a version in progress that already has a 40% improvement over the last version - and that is at very high values of p, where the software was (relatively) weak before. I have also fixed the problem with having to click four times when it is in the system tray. I'll keep on improving this before sending out a 1.11 release.

FatPhil
01-23-2003, 08:04 AM
Originally posted by paul.jobling
I offered to do exactly that, but Phil didn't want to play ball :bang: . Which just spurred me on...


I seem to remember you inquiring if we could merge my hashing with your maths, and I said that I can't split my code easily, it's too tightly coupled (do one job, and do it well). I did say I could offer you a doOneP(long long p) interface, and if that wouldn't do I asked exactly where you would like me to try and cleave it, for me to mull over while I'm in the code, but you didn't reply.

The _only_ cleaving I can currently do is to separate the filling of the hash table from the rest of the code. My 52+ bit maths is very clumsy (as it's C), and I'm sure that you could speed that up noticeably.

My current test version is much faster than 008, and even then I've not run out of ideas, and the code's getting more chaotic as I try these new things out - i.e. my algorithm is still changing. However, what I'm also doing while going through the code is stripping out the dead wood, the things that failed. So hopefully I will soon end up with a version that contains just what is needed and nothing more, and that's possibly something that you can work with. However, as I say I see nowhere to merge algorithms, as mine's pretty much one monolithic block.

Mail me, we can talk algorithms. If I can do it in C you can do it faster in assembler, but I genuinely believe it makes sense to get the algorithm correct before forking too much of it onto an assembly language route.

Phil

smh
01-23-2003, 08:57 AM
The reason I installed the fastest version available at this moment is that I'm going on vacation this weekend and I want to keep the program running on my office computer.

If a faster version is available by tomorrow, I'll install that one, since I won't be able to upgrade for the next 4 weeks (although I doubt my PC will keep running that long).

FatPhil
01-23-2003, 09:23 AM
Originally posted by smh
The reason I installed the fastest version available at this moment is that I'm going on vacation this weekend and I want to keep the program running on my office computer.

If a faster version is available by tomorrow, I'll install that one, since I won't be able to upgrade for the next 4 weeks (although I doubt my PC will keep running that long).

I'll have my fastest algorithm (i.e. no plans for any new major changes) ready by this evening. After that I'll have nothing but micro-optimisations to do (loop unrolling etc. that might be platform specific), but it makes sense to draw a line under it before I contort the C out of its current relatively simple state.

Phil

dmbrubac
01-23-2003, 09:29 AM
My completion estimate has dropped from 33 days to 12 due to program optimizations! I tried Phil's for a while but I'm back to Paul's (1.10) because SoBSieve shares its toys better with SB.

Thanks again guys

Mystwalker
01-23-2003, 10:46 AM
@ltd:

Oops, bad typo. :( I'm sorry!

So here's the corrected table:

0 - 25G Louie
25 - 50 ceselb
50 - 75 kmd
75 - 100 cjohnsto
100 - 125 Nuri
125 - 130 McBryce
130 - 150 RangerX
150 - 155 nuutti
155 - 175 MikeH
175 - 200 paul.jobling
200 - 210 dudlio
210 - 215 smh
215 - 225 dmbrubac
225 - 250 EggmanEEA
250 - 255 frmky
255 - 260 Alien88
260 - 275 ltd
275 - 280 Mystwalker

300-301 Paperboy

btw. I'm using NbeGon_008 with d=0.8 as it's the fastest setting for me so far.
There are ~15x as many overflows as with d=2.8, but only in the first column.

FatPhil
01-23-2003, 10:49 AM
Originally posted by dmbrubac
My completion estimate has dropped from 33 days to 12 due to program optimizations! I tried Phil's for a while but I'm back to Paul's (1.10) because SoBSieve shares its toys better with SB.

Thanks again guys

Have you tried setting the priority to "idle"?

Phil

Zonar
01-23-2003, 10:53 AM
I'm currently working on 301 - 302, as I posted yesterday. In that post I decided to take 301 - 310. Is it better to stop after the program has finished the 301.x range and then take 280 - 290, or should I go on with 302 - 310?

alexr
01-23-2003, 11:02 AM
I've got one, possibly two, machines without internet connections that could be used for sieving. So I know where to get SoBSieve or NbeGon... and where to post results... so where/from whom do I get SoB.dat?

dmbrubac
01-23-2003, 11:14 AM
Have you tried setting the priority to "idle"?

Yes I did. I suspect that since there are actually 31 levels of relative thread priority under Windows, there is a minor difference somewhere.

For instance, if both are running at the "Idle" Process Priority Class (which is, I believe, basically what you are changing when you modify the priority in Task Manager, and exactly what you are changing when you modify the priority in SoBSieve), the threads could still be at different Thread Priorities (Time Critical, Highest, Above Normal, Normal, Below Normal, Lowest, Idle).

I may not be exactly correct here, but I think I'm close. Look at Analyzing Processor Activity (http://www.microsoft.com/windows2000/techinfo/reskit/en-us/default.asp?url=/windows2000/techinfo/reskit/en-us/prork/pred_ana_eznf.asp) for more info.

HTH

Dave
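
A small illustration of the two levels Dave describes (Win32 calls, so Windows only; this is not code from either siever): the priority class is set per process, and each thread then has its own relative priority within that class.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* the per-process level: what Task Manager (and SoBSieve's own
     * priority setting) changes */
    if (!SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS))
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());

    /* the per-thread level on top of the class */
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_IDLE))
        fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

    /* ... low-priority work would go here ... */
    return 0;
}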

dmbrubac
01-23-2003, 11:16 AM
so where/from whom do I get SoB.dat?

Download SoBSieve. Even if you don't use it, it will come with the SoB.dat file.

Alien88
01-23-2003, 11:22 AM
I'm done sieving 255-260.

alexr
01-23-2003, 11:29 AM
i'll take 285-290

olaright
01-23-2003, 11:40 AM
This is getting a bit confusing, so I repost the whole thing again:

0 - 25 Louie
25 - 50 ceselb
50 - 75 kmd
75 - 100 cjohnsto
100 - 125 Nuri
125 - 130 McBryce
130 - 150 RangerX
150 - 155 nuutti
155 - 175 MikeH
175 - 200 paul.jobling
200 - 210 dudlio
210 - 215 smh
215 - 225 dmbrubac
225 - 250 EggmanEEA
250 - 255 frmky
255 - 260 Alien88
260 - 275 ltd
275 - 280 Mystwalker

285 - 290 alexr

300 - 301 Paperboy
301 - 310 Zonar


--> Two people needed for the remaining gaps!! (I cannot do it,
my processor is way too slow)

Paperboy
01-23-2003, 11:52 AM
I'm done with 300-301

MikeH
01-23-2003, 01:12 PM
All this competition does seem to be good! Good work Paul, good work Phil.

SoBSieve 1.1 running on an AMD XP 2100+ = ~40Kp/sec

Paul, what should I now be using for the alpha? Currently I'm using 1.0. Or should I just experiment?

One more request - we are halfway to being able to put SoBSieve in the Windows start-up group, but we still get the dialogue box that asks "The sieving is from p = ... to p = ... - OK to go?", and you have to click yes. Please can you remove this dialogue box, and maybe display the range in the title bar "SoBSieve 1.1 (170.2G - 175.0G)", or maybe just display it somewhere else on the window once it's running. But the key point is - it should start without user interaction.

And can you clarify what happens on power failure? Does it pick up from the last pmin= in the SoBStatus file, or does it take the last factor in the file and pick up from there? Or something else?

MikeH
01-23-2003, 01:20 PM
Has anyone else tried to get Phil's NbeGon_008 (or the batch file it produces to be more precise) running as a service with FireDaemon?

It works fine with Win2K, but under WinNT using the same settings it doesn't work. The symptom is that FireDaemon says the service has started, but when I look in Task Manager it isn't there! Any clues, anyone?

Alien88
01-23-2003, 01:33 PM
I'm grabbing 280-285 now..


0 - 25 Louie
25 - 50 ceselb
50 - 75 kmd
75 - 100 cjohnsto
100 - 125 Nuri
125 - 130 McBryce
130 - 150 RangerX
150 - 155 nuutti
155 - 175 MikeH
175 - 200 paul.jobling
200 - 210 dudlio
210 - 215 smh
215 - 225 dmbrubac
225 - 250 EggmanEEA
250 - 255 frmky
255 - 260 Alien88 [complete]
260 - 275 ltd
275 - 280 Mystwalker
280 - 285 Alien88
285 - 290 alexr


300 - 301 Paperboy [complete]
301 - 310 Zonar

Alien88
01-23-2003, 01:39 PM
I would like to divide this discussion into two new threads, so I am going to lock this one and start the new ones.

The new ones will be:
Sieve Coordination Thread - please use this exclusively for coordinating your blocks
Sieve Client Thread - please use this to discuss the clients.

Thanks,
Alien88