It would be nice if we could actually figure out where the optimal range would be.
P4 power in sieving mostly relies on a fast FSB.
Originally posted by Nuri
You would be better off to continue using your P4 for the sob client. Even PIII performs better than P4 at sieve (just for comparison: PIII-1000 @ 270kp/s and P4-1700 @ 170kp/s).
Using FSB800, I get 2*275 = 550kp/s on a P4 running at 3 GHz w/ Hyper-Threading enabled.
It's not as fast as the Athlons, but quite good anyway...
It would be nice if we could actually figure out where the optimal range would be.
I'm using the new 1M-20M Sob.dat. Should I still reserve my range with DC sieve or are the threads going to be merged some time soon?
Last edited by ceselb; 12-19-2003 at 08:02 PM.
I suspect the DC sieve thread is going to be closed and archived soon...
Originally posted by rosebud
I'm using the new 1M-20M Sob.dat. Should I still reserve my range with DC sieve or are the threads going to be merged some time soon?
alrighty
I won't stop the ones wanting to fill the gaps. I've made a post in the DC coordination thread if anyone is interested.
Originally posted by Mystwalker
I suspect the DC sieve thread is going to be closed and archived soon...
Last edited by ceselb; 12-19-2003 at 08:23 PM.
I have a small request.
Could anyone give me some information about the progress that has been made since the start of the sieve project (how many times faster the newest version is).
I was lost between page 10 and 15 of this thread.
thx and keep up the good work
Since the first version of the sieve, our speed has increased by a factor of around 25-30.
Or did you mean something else?
Going by this posting from Louie, there should be a performance increase of 400x since the days of NewPGen 2.70.
I think SobSieve began with ~8 kp/sec on (then) fast systems, which would result in an approx. increase factor of 45.
Would everyone using sobistrator v1.15 or v1.16 please upgrade to the new v1.18?
I made a bit of a screwup there, so B2 at RieselSieve is getting all your factors. Luckily he's forwarding them to Louie along with the name of the user submitting them. Sorry about all that. Just please upgrade right away, and rest assured that your submitted factors are well on their way to their rightful home.
http://n137.ryd.student.liu.se/sobistrator_118.zip
Thanks,
Mikael
there's no fact.txt in the directory.
Originally posted by Keroberts1
(In response to Death)
yup, right on. All you have to do when it's done is copy and paste the contents of fact.txt into the sieve submission window and the factors will be saved. Just remember to log in first. Also, if you look at the fact.txt file and see any factors that you think may be about to get passed by the PRP line, you should do your best to submit them as soon as possible. Resubmitting old factors will not cause any problems.
moved /ceselb
only factexcl.txt and factrange.txt
do you mean both of them? that looks like "fact*.txt" =))
I submitted both and received something like 25 verified, 20 new....
so I must reserve a new range.
and something strange:
162000396491 | 24737*2^21846079+1
163889464229 | 10223*2^25017809+1
163891911307 | 55459*2^21457486+1
163889464229 | 10223*2^25017809+1
why are there no numbers between 162000 and 163000? did I break something?
should I restart this from the beginning?
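As a side note, a factor line of the form p | k*2^n+1 can be double-checked with a couple of lines of Python before submitting, in case you want to be sure nothing got garbled. This is just a rough sketch, not anything from the sieve client, using the first factor from the list above:

# Check that p divides k*2^n + 1, i.e. that k*2^n is congruent to -1 mod p.
p, k, n = 162000396491, 24737, 21846079
ok = (k * pow(2, n, p) + 1) % p == 0   # pow(2, n, p) is modular exponentiation
print("valid factor" if ok else "not a factor")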
I think I understand your problem.
You sieved 162 billion to 164 billion instead of 162 trillion to 164 trillion (or 162000 billion to 164000 billion).
That range was sieved about 10 months ago, and its factors were already submitted. That's why you could not find any new unique factors. Since that was the first time you ran the client and no new factors were found, the client did not need to create a fact.txt file.
You do not need to worry about the factexcl.txt and factrange.txt files. Just ignore them and leave them as they are.
By the way, sieving 162000 to 164000 would take around 40 days on the fastest PC (or roughly 1000 times the time it took your PC to run the 162 billion to 164 billion range).
That is OK if you are planning to use a couple of PCs, or even a single PC, provided you are sure you will commit it over the coming days to working on your range and finishing it.
But if you are not that sure, I would recommend simply unreserving your 162000-164000 range in the coordination thread, then reserving and starting with a smaller range (for example a 100G range, i.e. 162000-162100, etc.).
Whichever you choose, I don't think there will be a problem.
Last edited by Nuri; 12-25-2003 at 06:30 AM.
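For what it's worth, the 40-day figure above comes straight out of the range width and the sieve rate. A quick back-of-the-envelope sketch, assuming sieve time scales roughly linearly with the width of the p-range and using the ~550 kp/s P4 rate quoted earlier in the thread as a stand-in for a fast PC:

# The reserved range 162000-164000 (G) is 2000G wide, i.e. 1000 times wider
# than the 162-164 (G) range that was actually sieved.
rate_p_per_sec = 550e3                   # ~550 kp/s
range_width_p = (164000 - 162000) * 1e9  # 2000 G = 2e12 p values
days = range_width_p / rate_p_per_sec / 86400
print(round(days))                       # roughly 42 days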
pmin=170002478067419 @ 199 kp/s
pmin=170002479067459 @ 199 kp/s
pmin=170002480067479 @ 199 kp/s
pmin=170002481067511 @ 178 kp/s
pmin=170002482067519 @ 178 kp/s
pmin=170002495067959 @ 168 kp/s
pmin=170002496067977 @ 168 kp/s
pmin=170002497067979 @ 170 kp/s
pmin=170002498067993 @ 170 kp/s
pmin=170002512068483 @ 160 kp/s
pmin=170002513068541 @ 160 kp/s
pmin=170002514068577 @ 153 kp/s
like this. after a restart it starts at 200 kp/s and slowly goes down to 150 kp/s
As you already know, Mike's scoring script also checks for gaps in the factors submitted so far. When there are no gaps left in your range that are large enough to be likely to harbor another factor, that range might already have been completed but not declared as complete yet.
Originally posted by Death
#= gap analysis indicates that the range is complete
171000-171500 Death # (ETA: mid feb)
what does # mean? there's only 2 days left to complete...
It's basically just a way to improve the information content of this table...
When you have 2 days left, it's of course possible that you find more factors, but the script won't be on alert if you don't...
please explain, what does "gap" mean? is a gap between factors, or in a factor?
Originally posted by Mystwalker
As you already know, Mike's scoring script also checks for gaps in the factors submitted so far. When there are no gaps left in your range that are large enough to be likely to harbor another factor, that range might already have been completed but not declared as complete yet.
It's basically just a way to improve the information content of this table...
When you have 2 days left, it's of course possible that you find more factors, but the script won't be on alert if you don't...
and I prefer to complete the range myself, not to abandon it. just to be sure...
When you're nearly done with your range the gap checking script might think that you're done.
Don't worry, nothing will happen (unless you disappear for a month or two).
Huh huh...
Originally posted by ceselb
When you're nearly done with your range the gap checking script might think that you're done.
Don't worry, nothing will happen (unless you disappear for a month or two).
171473269359173 | 55459*2^10274926+1
new one.
The gap is the distance between two factors (the p's - like the range between the 171473269359173 you've just found and the next factor). If it is a lot bigger than usual in that range, it is likely there are still factors missing.
Originally posted by Death
please explain, what does "gap" mean? is a gap between factors, or in a factor?
Didn't mean that, as the gap analysis can only guess, plus it has to take into account that the factor density might be sparse at that particular range. If it is not, you'll still find some factors, so abandoning ranges is definitely not advisable.
and I prefer to complete the range myself, not to abandon it. just to be sure...
seems that somebody turned off the power on Saturday. my current range should have been completed on Sunday, but there's one more day left =((
>> If it is not, you'll still find some factors, so abandoning ranges is definitely not advisable.
yes, I found one more factor. but there are 22 hours left. hope to find more...
and another question.
how many WU's does one factor save?
That's normal.
The gap checker looks at whether the gap is larger or smaller than the largest acceptable gap for that range.
As I understand it*, the aim of the gap checker is to avoid missing any factors due to human error. If there is a gap that is larger than the largest acceptable gap for that range, and the user claims that the range is complete, then we might be missing some factors (i.e. the user forgot to submit them, etc.). On the other hand, it's almost guaranteed that there will be at least some factors in a range that is as large as the largest acceptable gap.
It's something like: every square is a rectangle, but not every rectangle is a square.
* In fact, for me, it's more like a part of many fancy stats to play with. The more, the better.
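In other words, the check itself is simple. A rough sketch of the idea in Python (this is not Mike's actual script, and the example p values and threshold below are made up purely for illustration):

def range_looks_complete(p_lo, p_hi, factor_ps, max_acceptable_gap):
    # Largest gap between consecutive submitted factors, counting the
    # range boundaries so leading and trailing gaps are included too.
    points = [p_lo] + sorted(factor_ps) + [p_hi]
    largest_gap = max(b - a for a, b in zip(points, points[1:]))
    return largest_gap <= max_acceptable_gap

# Hypothetical factors (in G) for a 171000-171500 style range:
print(range_looks_complete(171000, 171500, [171020.5, 171233.1, 171473.3], 250))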
I'd just say it's an alerting tool. It's designed to produce next to no false alerts, which means that it generously overlooks smaller anomalies.
Originally posted by Nuri
It's something like: every square is a rectangle, but not every rectangle is a square.
That way, it is very likely that some factors will still be found in a range even after the gaps have been "accepted"...
latest record
p = 666000078004351 @ 195kp/s
and it has been waiting there for half an hour now....
why does it stop???
proth_sieve_smov.exe -i 1 -vv
p = 666000078456017 @ ???kp/s
and stops. could it be a factor, so that it takes an hour to check it???
UPD: it has been hanging for three hours now, and nothing.
maybe there's a problem in the client?
The same happens here. That's strange, because the client works for higher p values. So, it's not something to do with a boundary etc.
For some reason, it gets stuck between 666000078456017 and 666000078456023.
I dunno why it happens, but it will run again if you set pmin to 666000078456024.
PS: Death, I also checked the range with one of our older clients (SoBsieve 1.10). There's no factor at that specific point.
Last edited by Nuri; 02-10-2004 at 12:48 PM.
Ah, I now remember that one of my machines was stuck for four days last week at somewhere around 184.64T. Unfortunately, I did not notice it until I came back from holiday.
Anyways, in that case, it started running without a problem after a reboot. So, I'm not sure if it's due to the same reason.
Don't worry. I checked them all.
same shit
p = 666000350160731 @ ???kp/s stuck.
and now it's 10:10 and I don't know how long it was hanging during the night =((
waste of time...
where's the author??
PS
that's not a fault of any particular optimization.
_cmov, _sse2 and _sieve.exe all stop the same way.
it won't start even at 742, so I checked it with SoBSieve.exe and it works up to 666000354181219 very well.
so I'll continue from this point with _sse2.exe again.
hope to hear from Mikael Klasson and Paul Jobling soon...
I have reserved the following range and seem to have the same problem.
After 666666110000350 it won't move any further.
Must have been hanging there for my entire vacation of two weeks or so...
Sorry to hear about the problems, but at least I think I know what's causing it. Give me two days or so and I'll try to have a fix for it.
What do you mean by this?
Originally posted by Death
sieving countdown!
so the client sieves not from 100 to 150 but from 150 to 100.
maybe this can raise speed a bit?
and it would be useful to have a choice!
Mikael
I mean exactly what I write.
The numbers should not be increasing but decreasing.
So you sieve not from XXXXX to XXXXX+500 but from the larger number down to the smaller one.
example
start sieving from 150000 to 145000
and it goes
150000 @ xxx it/s
149996 @ xxx it/s
149992 @ xxx it/s
etc...
btw, what is the problem with the client? what happened?
waiting for fix +)))
Thank you. I wasn't doubting that. It's just that the meaning of what you write isn't necessarily as obvious to me as it is to you.
Originally posted by Death
I mean exactly what I write.
I'm afraid that's pointless. You'd still have to sieve the same numbers. You'd just get a rate that started low and increased instead (say rate 100kp/s at the start and 150kp/s at the end instead of 150kp/s at the start and 100kp/s at the end).
The numbers should not be increasing but decreasing.
So you sieve not from XXXXX to XXXXX+500 but from the larger number down to the smaller one.
I just noticed you've been reserving extremely high ranges. Not only is it very inefficient for the project (the primes are denser in the lower ranges so it's always best if you reserve ranges that are as low as possible), but I also think the current client works fine at the ranges we're currently at (and a bit further).
btw, what is the problem with the client? what happened?
waiting for fix +)))
Yes, the client is still broken and I will fix it, but you can eliminate the problem yourself and increase efficiency at the same time by switching to a lower range. To quote you: "waste of time...". 214T+ looks free.
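To put some rough numbers on the "lower ranges are denser" point: under the usual heuristic that a prime p divides a given k*2^n+1 candidate with probability on the order of 1/p, the factor yield per unit of p-range falls off roughly like 1/(p*ln p). A back-of-the-envelope sketch (just an estimate, not anything built into the client or the stats scripts):

from math import log

def relative_yield(p):
    # Expected factors per unit of p-range near p, up to a constant factor.
    return 1.0 / (p * log(p))

low, high = 214e12, 666e12   # the 214T and 666T ranges discussed above
print(relative_yield(low) / relative_yield(high))   # roughly 3.2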
The problem is that the hash table gets full.
Mikael
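For the curious, the failure mode described above is the classic one for a fixed-size open-addressing hash table. A generic sketch of the effect (not proth_sieve's actual code): once every slot is occupied, the probing loop never finds a free slot and simply spins, which from the outside looks like a hung client.

TABLE_SIZE = 8
table = [None] * TABLE_SIZE

def insert(key):
    idx = key % TABLE_SIZE
    while table[idx] is not None:      # never exits once the table is full
        idx = (idx + 1) % TABLE_SIZE   # linear probing, no "table full" check
    table[idx] = key

for k in range(TABLE_SIZE):
    insert(k)            # fills the table
# insert(TABLE_SIZE) would now loop forever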
Originally posted by mklasson
Thank you. I wasn't doubting that. It's just that the meaning of what you write isn't necessarily as obvious to me as it is to you.
=) you are welcome. I don't want to be mean, I just can't explain it well. I'm not a native English speaker, so it's usually hard to explain something in different words. my small vocabulary, I guess...
I'm afraid that's pointless. You'd still have to sieve the same numbers. You'd just get a rate that started low and increased instead (say rate 100kp/s at the start and 150kp/s at the end instead of 150kp/s at the start and 100kp/s at the end).
Yes, I know that the numbers are the same, but sieving could start from the highest number and count down to the lowest. now they are increasing, but sieving could do the same work in the other direction.
maybe this could raise speed a bit... and maybe it could be useful for something we don't know about....
I just noticed you've been reserving extremely high ranges. Not only is it very inefficient for the project (the primes are denser in the lower ranges so it's always best if you reserve ranges that are as low as possible), but I also think the current client works fine at the ranges we're currently at (and a bit further).
call it a whim. since I can't connect to sob.pns.net to run the usual SoB client (damn this pesky corp. firewall), I just want to do something fun.
if v3 comes to life and ECC2-109 comes to an end, I can add some boxen to SoB. maybe a few dozen...
Yes, the client is still broken and I will fix it, but you can eliminate the problem yourself and increase efficiency at the same time by switching to a lower range. To quote you: "waste of time...". 214T+ looks free.
I switched to a slightly lower range for a few days... until I get the fixed client.
The problem is that the hash table gets full.
well, it's good that we discovered this before going on to check really big ranges +))
Edited bold tags for readability. /ceselb
There you go. New clients are up at http://n137.ryd.student.liu.se/proth_sieve.php. No BSD clients atm, but they'll come up soon.
Highlights are:
5-10% speedup.
Lowered memory usage.
Fewer infinite loops.
Mikael
EDIT: Oh yeah, Linux and BSD versions still (like in 0.40) only measure time for current process so the rate isn't affected by other running programs. I'll try to fix that in the _next_ version...
On my machine it's reporting a slight speed drop. However, I had some doubts about the accuracy of the speed reporting in v.40. What is your opinion on this?
Ver. 40 reported my speed as 645kp/s, with a 699kp/s max, and lower depending on whether I had anything else running such as a web browser. These seem a little high.
Ver. 41 reports my speed as 621kp/s max and lower of course if I have something else running.
Was ver. 40 just reporting the wrong speed and ver. 41 is showing the correct speed?
Or is Ver. 41 slower on my machine?
If it is slower I will change back to ver. 40 of course, but if ver. 41 is correct I will not.
Thanks
Last edited by Deoje; 02-12-2004 at 03:49 AM.