Sieve - question



Troodon
04-01-2003, 04:11 PM
Originally posted by Mystwalker
Just out of curiosity:
What happens when someone submits a factor for a k/n pair that is currently PRP tested? Will this test be aborted the next time the client tries to submit a block?

jjjjL
04-25-2003, 08:20 PM
right now there is no provision to deal with the case where a factor is submitted for a number under test. there is no way to signal the client to get a new test.

maybe this summer when i redesign the client i'll add a way to tell the client to get a new test.

at such high factor levels, very few tests fall into this case. if you feel differently, email with your reasons so i can think about it.

-Louie

Mystwalker
04-26-2003, 09:16 AM
there is no way to signal the client to get a new test.

What about the "server had no record of proth test" - thing? That sounds like it's possible to force the client to process a new test.

If submitting factors for a range currently under (a lot of) PRP tests doesn't take away work from those tests, then keeping that range in the sieving effort is only useful for a double check AFAIK.
So we could theoretically raise the lower limit of the sieve to the current PRP level (ATM 3.5M), which would speed up sieving by ~1.5%...

MikeH
04-26-2003, 10:43 AM
at such high factor levels, very few tests fall into this case.

Looking at today's sieving stats (http://www.aooq73.dsl.pipex.com/scores.txt), 30 candidates were eliminated in the range 3M<n<4M, while the candidates remaining in that range (excluding PRP) number 35995. There are currently 2809 PRP tests pending, almost all of them in this n range.

(2809 * 30)/35995 = 2.34

Which means that, given an even distribution, about 2 of the pending PRP tests are currently being performed for k/n pairs for which factors are now known.

Again, given an even distribution, each of those tests will on average be 50% complete. Thus about one whole PRP test's worth of effort is being lost every day.
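
To make the arithmetic explicit, here is the same estimate as a rough Python sketch (the 2809/30/35995 figures are just today's numbers, and the 50%-complete figure is only the even-distribution guess):

pending_tests   = 2809     # PRP tests currently handed out
factors_per_day = 30       # factors found yesterday for 3M < n < 4M
candidates      = 35995    # candidates remaining in that range (excluding PRP)

# expected number of pending tests whose k/n pair was factored yesterday
hit_tests = pending_tests * factors_per_day / candidates    # ~2.3

# assume a hit test was, on average, half finished when the factor arrived
wasted_tests_per_day = hit_tests * 0.5                      # ~1.2
print(round(hit_tests, 2), round(wasted_tests_per_day, 2))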

I don't have too much historical data, so I don't know if yesterday was an average sieving day, but given that the situation will worsen as PRP test times get longer or the number of users increases, a mechanism to inform the client to get a new test seems like a good idea.

Mike.

Edit: I found some data from 14 days ago. Over the last 14 days ~16 factors have been removed daily in this range, so yesterday was above average.

Troodon
04-26-2003, 07:34 PM
I'm sieving the range 10660 - 11000, and currently it's sieved up to 10969.6 G. Here are some stats:
- 651 factors found
- 43 of them for 3M<n<4M. Of those, at least 20 had, when submitted, an n smaller than the maximum n of the current test window.

smh
04-27-2003, 06:22 AM
I haven't really been following the sieving status lately.


- 43 of them for 3M<n<4M. Of those, at least 20 had, when submitted, an n smaller than the maximum n of the current test window.

Without knowing the exact removal rate of sieving vs. prp-ing, i think a lot more effort should be put into sieving.

Of course the most effective way of reducing candidates is finding a prime, but until that happens, the most effective way is sieving.

What is the current rate of removal compared to PRP testing?

MikeH
04-27-2003, 01:35 PM
What is the current rate of removal compared to PRP testing?

Yesterday I submitted 60G worth of sieve results @ p~17T. This contained 90 factors, which cover the full 19.7M range of n. I am using a relatively new sob.dat file, so (almost) none of these should be duplicates.

This was all sieved on P3s, and I don't have any good PRP comparisons, but if we take my AMD XP+, it now takes about 3 days to PRP test @ n=3.5M. On the same PC, I could sieve 43G over the same 3 days.

So for the time it takes to complete one PRP test, I would have ~64 factors; of these, 54 would be in the range 3.5M<n<20M, and even focusing in on the near term, 5 would be in the range 3.5M<n<5M. Looks like sieving is still useful.
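
If it helps, the same comparison as a minimal Python sketch (all inputs are the figures quoted above, not measurements of anyone else's hardware):

factors_per_G  = 90 / 60     # 90 factors from 60G of sieving => ~1.5 per G
G_per_test     = 43          # G this box could sieve in the ~3 days one PRP test takes

factors_per_test = factors_per_G * G_per_test                  # ~64 factors
n_span           = 19.7                                        # the sob.dat covers ~19.7M of n (in millions)
in_3p5_to_20M    = factors_per_test * (20.0 - 3.5) / n_span    # ~54
in_3p5_to_5M     = factors_per_test * (5.0 - 3.5) / n_span     # ~5
print(round(factors_per_test), round(in_3p5_to_20M), round(in_3p5_to_5M))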

Hope that helps.

smh
04-27-2003, 02:34 PM
Okay, so with sieving you remove 64 candidates in the time you can prp one number around 3.5M. I would say a huge amount of sieving is needed ASAP, although it's very hard to say how much.

- It depends on what processor you're using. I guess a P3 and Athlon need to sieve much deeper compared to a P4 (relatively slow at sieving, fast at prp-ing)

- A prime would remove a lot of candidates at once

- prp-ing gets slower as the exponents get larger, and the slowdown grows faster than linearly with n.

- Sieving will speed up a little as you go deeper, but will remove fewer candidates in a given time.

- did i forget any?

So it's really hard to say how much further to sieve, but it's obvious that more effort should be put into sieving at this moment.

Mystwalker
04-27-2003, 02:49 PM
- did i forget any?

Maybe

- A prime would remove a k completely. When considering PRPing, this is analogous to the k/n pairs removed. But with sieving, the number of remaining candidates does not affect speed, whereas removing a whole k (the next prime) would speed things up by 9% AFAIK.

MikeH
04-27-2003, 03:14 PM
When considering PRPing, this is analogous to the k/n pairs removed

I also thought it was linear by k, but having done a quick test for this post (http://www.free-dc.org/forum/showthread.php?s=&postid=26571#post26571), it appears it's not quite linear. Having removed 3 of the 12 k, I would expect a 33% increase in speed, but it was only 25%.

Also, which k is eliminated next will determine the net speed increase. Eliminating k=67607 would be best for sieve performance, but worst for SB overall; eliminating k=55459 would be worst for sieve performance, but best for SB overall.

smh
04-27-2003, 06:09 PM
Originally posted by Mystwalker
Maybe

- A prime would remove a k completely. When considering PRPing, this is analogous to the k/n pairs removed. But with sieving, the number of remaining candidates does not affect speed, whereas removing a whole k (the next prime) would speed things up by 9% AFAIK.

That's what i meant to say with:


- A prime would remove a lot of candidates at once

Mystwalker
04-27-2003, 06:52 PM
I just wanted to clarify it. ;)

jjjjL
04-27-2003, 07:53 PM
I like MikeH's analysis


Looking at today's sieving stats, 30 candidates were eliminated in the range 3M<n<4M, while the candidates remaining in that range (excluding PRP) number 35995. There are currently 2809 PRP tests pending, almost all of them in this n range.

(2809 * 30)/35995 = 2.34

Which means that, given an even distribution, about 2 of the pending PRP tests are currently being performed for k/n pairs for which factors are now known.

Again, given an even distribution, each of those tests will on average be 50% complete. Thus about one whole PRP test's worth of effort is being lost every day.


This is the right way to do this I believe.



...focusing in on the near term, 5 would be in the range 3.5M<n<5M. Looks like sieving is still useful.

This is one of the most straightforward ways to think about sieving vs prp. The "problem" of trying to weigh the chance that a prp test finds a prime, thereby "eliminating" many remaining tests at once, makes it hard to compare the two directly. I would say there is a better than average chance of finding a prime before n=5M, so only the factors found in this range should be used to directly compare the speed of sieving to prping.

Also, another pro for prping is that once the prime is found, a bunch of tests are removed AND sieving is faster from that point on. No amount of sieving can hope to do either of those things. I think someone determined that even if we sieve up to 200T, it still won't remove as many candidates as a single prime report will.

I have this nagging feeling that a stunningly beautiful model could be constructed that took into account the likelihood of prime discoveries by n-level, sieve speed, expected sieve speedup, and prp speed to properly calculate the weighting of cpu time between sieving and prping. Without knowing the form of this model, i can say that the pendulum is swinging towards a heavier weighting of prp testing as more and more time goes by with no new primes. It may be that sieving is still underweighted, it's hard to say.

the main thing that stops me from working it all out is that i imagine most people will do about the right weight no matter what we determine. the weighting is probably not too sensitive to exact levels. meaning that even if i could construct a perfect model and the predictions worked out and i could perfectly control how cpu's are assigned between sieving and prping, i'd end up speeding up the discovery of the next prime by a day or two but no more.

give me your thoughts on all this. i'm going out to BBQ with friends. :)

-Louie

Nuri
04-27-2003, 10:56 PM
Does anyone have any ideas on how much of our resources are put to sieving vs prping currently?

I don't know how correct it is, but I tried to analyse some stats to figure it out. Please comment if you have any corrections or other approaches.

Firstly, resources on sieving:

I guess there are roughly 15 pcs used for sieving (the actual number might be more, maybe 30; what I mean is the equivalent of 15 average pcs used 7/24).

Now, how I reached that number?

- There are 18 individuals with uncompleted ranges currently. I'm pretty sure almost everybody is using only one PC for sieving. Also, at least half of us are not allocating that resource 7/24. Still, assuming 1.2 pcs per user on average, and also assuming only half of those run 7/24 whereas the remaining half runs less: there should be ~16 average pc equivalents => 18*1.2*(1/2*100%+1/2*50%).

- We're submitting roughly 300 factors per day for the main sieve.
This is calculated from the number of remaining candidates with n>4 million. This figure was 577273 on Apr. 22nd, and is 575974 now. Since there are no prp tests for n>4m, all of this is attributable to sieving. So, there were 1299 unique factors submitted in the last 5 days. Adjusting for 3-20m instead of 4-20m, and for duplicates (7.5%), that should be equal to ~1500 submitted factors (300 per day). => ~1500 = 1299 * (17/16) * (1+7.5%)
Assuming that an average pc sieves 12G per day, and there are ~1.5 factors per G at the average p levels we're sieving currently: there should be ~17 average pc equivalent => 300 / (12*1.5)

Secondly, the resources on prping:

- There were 817 distinct IP addresses active within the last day. Assuming 10% of the pcs actually prping did not report within the last 24 hours (mainly because of an unchecked "report intermediate blocks" setting; the pcs that could not report even one block within the last 24 hours would not affect the calculation much in terms of average 7/24 pc equivalent power), that's roughly 900 pcs. Also assuming only half of these 900 pcs were allocated to prping 7/24 whereas the remaining half is allocated less: there should be ~675 average pc equivalent = 900 * (1/2*100%+1/2*50%)

- Or, there were 318 tests completed within the last 24 hours, 59 of which came from secret. So, 260 tests are within the main project prp range (3m-3.55m). Assuming it takes roughly 3 days for an average pc to finish a prp test within that range: there should be ~780 average pc equivalent = 260 * 3

If my above calculations are correct, we are allocating only 2% - 3% of our project resources to sieving.
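
For anyone who wants to check me, here are the same estimates condensed into a small Python sketch (every input is one of the assumptions above, nothing measured):

# sieving side: two independent estimates of full-time pc equivalents
sieve_from_users   = 18 * 1.2 * (0.5 * 1.0 + 0.5 * 0.5)      # ~16
sieve_from_factors = 300 / (12 * 1.5)                         # ~17 (300 factors/day, 12G/day, 1.5 factors/G)

# prp side: two independent estimates
prp_from_ips   = 900 * (0.5 * 1.0 + 0.5 * 0.5)                # ~675
prp_from_tests = 260 * 3                                      # ~780

low  = sieve_from_users   / (sieve_from_users   + prp_from_tests)
high = sieve_from_factors / (sieve_from_factors + prp_from_ips)
print(round(low * 100, 1), "-", round(high * 100, 1), "% of project resources on sieving")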

So, sieving is already very much underweighted. Maybe more so than it should be.

OberonBob
04-28-2003, 03:46 PM
I say sieving should only be done by computers that can't do prp tests. I was only sieving on my alpha box, until I lost access to that alpha. As soon as my new Sun server comes in :) I will start sieving again on that box.

It's a win-win, assuming there are enough non-Intel boxes that are accessible to people who have the interest.

Of course, we are forced to use the older Nbegone that hasn't been updated for some time.

just my 2 cents.

smh
04-28-2003, 04:17 PM
I say sieving should only be done by computers that can't do prp tests

I still don't agree 100%

There are too many variables here to say what's really best, but to me it seems that the current range is still way undersieved.

Moo_the_cow
04-29-2003, 08:39 PM
quote:
________________________________________________
If my above calculations are correct, we are allocating only 2% - 3% of our project resources to sieving.
________________________________________________

Maybe. However, those 2%-3% of our project resources have removed as many candidates for n>3M as the other 97%-98% (which only does PRP testing), and sometimes more.

Nuri
04-29-2003, 09:37 PM
Originally posted by Moo_the_cow
Maybe. However, those 2%-3% of our project resources have removed as many candidates for n>3M as the other 97%-98% (which only does PRP testing), and sometimes more.

I agree. That's what I was also trying to point out. I think that we're undersieved and also that sieving is underweighted (i.e. we should put more weight on sieving at least until we reach 50T, as explained below).

This is also evident from the sieve stats. If I can become the third siever by far with only three PIII-1000s, this simply means almost nobody with sufficient resources is really pushing it at all. To be honest, I am really ashamed of being able to get the third rank so easily.

I really wish we had at least double (or triple) the resources we now have at sieving. Moving 30 additional pcs from prp to sieving for the next couple of months would not hamper the project much, but in fact it would boost the project going forward by clearing a significant percentage of the remaining candidates.

BTW, I was the one who suggested that sieving will clear roughly 47,000 candidates all the way up to 200T. Maybe this was a bit misleading, so I should add another comment: roughly 18,000 unique factors of that number should be within the 20T-50T range.

I strongly believe that with the latest client version, we can easily reach 50T within two or three months if we shift just a few percent more of our resources to sieving.

Just a quick calculation:

Let's assume we allocate 50 pcs in total (including the current ones) to sieving for the next three months. Assuming a pc averages 10G per day, those resources will reach 50T in about 2.5 months. => 50*10 = 500G = 0.5T per day --- We have 35T to cover up to 50T. --- 35T / 0.5T = 70 days. --- I haven't calculated the exact figure, but including the unique factors below 20T, I guess this should mean at least 23,000 unique factors. --- Roughly 2,000 of these will be within the n=3.5m-5.0m range (=23000*1.5/17).

In comparison, let's look at how many prp tests those 50 pc's can finish within the next 70 days. Assuming an average pc within those 50 will be able to finish a prp test (@ range n>3.5m) within 3.5 days, a pc will be able to finish 20 prp tests in 70 days. Multiplying this with 50, we reach 1,000 prp tests.
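
The same quick calculation, written out as a Python sketch (all the rates are the assumptions stated above):

pcs        = 50
G_per_day  = 10
days       = 35000 / (pcs * G_per_day)          # 35T left up to 50T  =>  70 days

factors_gained  = 23000                         # my estimate, including the unique factors still below 20T
factors_3p5_5M  = factors_gained * 1.5 / 17     # ~2,000 in the 3.5m-5.0m range

prp_tests = pcs * days / 3.5                    # ~1,000 tests the same pcs could do instead
print(days, round(factors_3p5_5M), round(prp_tests))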

So, I think the result is obvious and there is no need for me to comment on that.

wblipp
04-29-2003, 09:53 PM
Originally posted by Nuri
I really wish we had at least double (or triple) the resources we now have at sieving. Moving 30 additional pcs from prp to sieving for the next couple of months would not hamper the project much, but in fact it would boost the project going forward by clearing a significant percentage of the remaining candidates.

I think the problem is that sieving requires too much hands-on activity for most people. IMO, the only way to get non-trivial resources devoted to sieving will be to create an automatic client that gets tests and reports results without intervention, as the primality test client does.

There are some interesting design tradeoffs in whether it should be a second client or an enhanced, multifunctional client. The multifunctional client could allow the project to adjust resources between sieving and prime testing - but that might irritate people that are here for the hope of fame in finding the next prime.

smh
04-30-2003, 09:25 AM
In comparison, let's look at how many prp tests those 50 pc's can finish within the next 70 days. Assuming an average pc within those 50 will be able to finish a prp test (@ range n>3.5m) within 3.5 days, a pc will be able to finish 20 prp tests in 70 days. Multiplying this with 50, we reach 1,000 prp tests.

So, I think the result is obvious and there is no need for me to comment on that.

One more thing to say: besides removing more candidates below 5M, we get 20,000 factors FOR FREE in the 5 - 20M range.

Troodon
05-06-2003, 05:39 AM
Originally posted by jjjjL
right now there is no provision to deal with the case where a factor is submitted for a number under test. there is no way to signal the client to get a new test.

What happens in the following case:
- A k/n pair is sent out for PRP testing.
- While it's being tested, someone finds a factor for it.
- The PRP test of that k/n pair expires.

Then, is it re-assigned for PRP testing or is it removed?

Lagardo
05-06-2003, 11:51 AM
A few thoughts from an onlooker...


I don't think it is by itself a bad sign that (really less than) 1 full PRP test per day is lost because of sieved k/n pairs whose factors come in while the test is pending. As long as both are going on at the same time, this kind of thing is going to happen, and the question will be "how many lost PRP tests per day do we find acceptable". The tone around here indicates that one per day is considered too much by many, but I sure haven't seen any real justification for this judgement.

According to the project stats, 372 tests were completed yesterday, so if one of them was eliminated while pending, that would constitute 0.3% of the total CPU power -- we lose a lot more to hanging/stalled/segfaulting clients, downtime on people's networks, etc.

This also means if we divert 0.5% of the PRP-machines to sieving, we're already losing more PRP power than we're losing by the occasional eliminated-while-pending k/n pair.

Since finding a prime eliminates all remaining candidates for that k, one could look at it this way: PRPing has a very small chance to do a whole lot of good, while sieving has a much higher (practically guaranteed) chance to do a little good. I can't do the math right now (no coffee yet) but it seems that one could compute an expectation value here -- there are x tests total left for n<5M, let's assume there's a good chance that there's one prime amongst those, then each PRP test has something like a 1/x chance of finding the next prime. It takes d days to complete such a test, so a machine doing PRP expects to spend on the order of d*x days before the next prime turns up. During the same time, sieving would eliminate f factors.
So how many eliminated candidates would we have to credit to a prime to "break even"?
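
Not an answer, but here is one crude way to write that break-even down, as a Python sketch -- x, d and f below are placeholders, not real project numbers, and it treats a factor found and a test completed as equally valuable, which is itself debatable:

x = 10000      # PRP tests left below n = 5M (placeholder)
d = 3.0        # days one machine needs per test (placeholder)
f = 20.0       # factors/day the same machine could find by sieving (placeholder)

cpu_days_to_prime = x * d                    # expected cost, in cpu-days, of PRPing until the prime shows up
composites_found  = x                        # candidates eliminated by those tests anyway
sieve_factors     = f * cpu_days_to_prime    # what sieving would have removed in the same cpu-days

# the prime "breaks even" if it is worth at least this many eliminated candidates
break_even = sieve_factors - composites_found
print(break_even)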

Sieving is not an official task of 17ob -- if it were, there would be mention of it on the official website. As it is, nobody will ever even hear about it, unless they come here to this forum and read around a lot. I am not aware of any FAQ or explanation that contains mention of it or of a pointer to any kind of download (outside this forum) or anything. If one wanted more emphasis on it, one might conceivably have to write a new client for it -- but the current "unofficial" status of sieving has gotten it to the current point, which isn't as dire as it might seem on the surface, and if you start handing out clients for it, you're suddenly going to have 20% or 50% or such of the total CPU power on sieving. That would be a lot less optimal than the current state.


I apologize in advance if any of this doesn't make sense -- I'm really just an onlooker here...

smh
05-06-2003, 01:48 PM
Originally posted by Lagardo
A few thoughts from an onlooker...


I don't think it is by itself a bad sign that there is the loss of (really less than)1 full PRP test per day because of sieved k/n that come in while the test is pending. As long as both are going on at the same time, this kind of thing is going to happen, and the question will be "how many lost PRP test per day do we find acceptable". The tone here around indicates that one per day is considered too much by many, but I sure haven't seen any real justification for this judgement.


One test a day in itself isn't a lot, but don't forget that on many machines tests take much longer than a day to complete. It's also not only factors found for numbers which are currently in progress, but also factors for numbers which have already been tested composite. Those tests wouldn't have been necessary if we had known the factors beforehand.


Originally posted by Lagardo
This also means if we divert 0.5% of the PRP-machines to sieving, we're already losing more PRP power than we're losing by the occasional eliminated-while-pending k/n pair.


Not really, see above. And don't forget the situation only gets worse when tests get larger.


Originally posted by Lagardo
Since finding a prime eliminates all remaining candidates for that k, one could look at it this way: PRPing has a certain very small chance to do a whole lot of good, while sieving has a much higher (practically guaranteed) chance to do a little good.


This is what makes it hard to say what the optimal sieving depth is. Sieving doesn't find large primes, and that's what it's all about. But it does help to eliminate many candidates much faster than prp-testing.


Originally posted by Lagardo
...but the current "unofficial" status of sieving has gotten it to the current point, which isn't as dire as it might seem on the surface, and if you start handing out clients for it, you're suddenly going to have 20% or 50% or such of the total CPU power on sieving. That would be a lot less optimal than the current state.


At the moment we are really undersieved. It gets even worse when you keep in mind that numbers aren't proven composite until a factor is found or two prp tests with a matching residue are done. (GIMPS has shown that 3-5% of tests are faulty -- on numbers that are a bit larger, so hopefully it's a bit better for SoB.) 20% of the effort going into sieving wouldn't hurt for the time being; it would eliminate many candidates so the primes out there will be found faster.

OberonBob
05-06-2003, 02:25 PM
Originally posted by smh
At the moment we are really undersieved. It gets even worse when you keep in mind that numbers aren't proven composite until a factor is found or two prp tests with a matching residue are done. (GIMPS has shown that 3-5% of tests are faulty -- on numbers that are a bit larger, so hopefully it's a bit better for SoB.) 20% of the effort going into sieving wouldn't hurt for the time being; it would eliminate many candidates so the primes out there will be found faster.


So, when Louie makes an updated Sob.dat file, is he just going to remove all the k/n pairs that have factors, or all the ones that have been PRP tested too?

The reason I ask is that for the purposes of any future PRP double check, we will want to leave in every k/n pair that we don't have a factor for, even if it has been tested for primeness. We need to have those factors, at least all the ones we can find.

Right?

Mystwalker
05-06-2003, 03:18 PM
As the number of k/n pairs doesn't affect sieving speed, I guess he'll leave them in.

Nuri
05-06-2003, 08:04 PM
While I strongly agree with smh that we're currently undersieved, I don't think sieving really needs as much as 20% of the project's computing power. That figure suggests roughly 2T per day in aggregate. That would be too much.

In the end, even if we succeed in sieving up to 200T, this will eliminate ~7% of the remaining candidates up to n=20m. All others will be removed either by prp testing or by the primes we'll hopefully find.

Very roughly speaking, sieving currently consumes 2-3% of project resources. All we need is just 20-30 PCs more, and only for the next two or three months. This will be enough to get ~35% of the benefit we might expect from sieving up to 200T going forward. After that, it's highly probable that sieving will still be beneficial on a PC basis, but not as much as it is now, and the remaining 65% will be harder to attain.

I think this is the main reason why sieving is not marketed loudly by Louie. The discussions in this forum were enough to get sufficient contributions so far, and I guess they will also be enough to attract a few additional volunteers (which will probably be enough).

smh
05-07-2003, 12:38 PM
While I strongly agree with smh that we're currently undersieved, I don't think sieving really needs as much as 20% of the project's computing power. That figure suggests roughly 2T per day in aggregate. That would be too much.


I didn't say that 20% was really needed, only that it wouldn't hurt for a while.

Now that i think of it, maybe it's not such a bad idea to direct 20% of the resources to sieving for two or three weeks, and after that go down to 5-10% for another month.

That way we can catch up a little; after that, the usual sieving can be used to keep pace with tests that are getting larger.

Lagardo
05-07-2003, 03:00 PM
OK, so why isn't there some web-page with a "gentle introduction to sieving"? What it is, why it's useful, where to get the software, how to use it, what place it has in the overall SoB context -- that kind of thing.

I consider myself reasonably bright, but I have quite frankly only a hazy notion of sieving -- I mean, everybody has written a little "sieve.c" in college to compute a couple prime numbers, but we're talking numbers with hundreds of thousands of digits here and quite frankly half of the lingo has me confused more than anything (when someone says "50T", for example, I have no clue what they're talking about).

I glean from various side-comments, for example, that sieving can be done on older/slower/weird hardware, and I have an old Sparc5 that is currently gathering dust and might as well contribute (however infinitesimally) to the effort here - but I wouldn't know where to start...

I bet you'd get more people to help out if you spelled out these various things: what kind of work is required and what kind of software is to be run in what context and such...

Joe O
05-07-2003, 05:02 PM
Try this. (http://www.free-dc.org/forum/showthread.php?s=&threadid=2963)

Nuri
05-08-2003, 06:12 PM
Hi Lagardo,

There is a good discussion of how to start sieving on the thread Joe suggested. I'm sure you'll find it useful.

Also, I tried to explain the positioning of Main sieve and DC sieve sub-projects within the context of our main project, namely seventeenorbust, on this thread (http://www.free-dc.org/forum/showthread.php?s=&threadid=2772&perpage=40&pagenumber=2), at my post dated 04-23-2003. I'm sure you'll find it useful too.

Please take a look at those two threads before you proceed.

Now, I'll try to give answers to your specific questions above.


why isn't there some web-page with a "gentle introduction to sieving"?
That's hard to answer. Probably Louie did not want an unreasonably high percentage of project resources to shift to sieving in an uncontrollable manner. Still, a good compilation of sieving FAQs on this forum would be useful.


the lingo has me confused more than anything
I know it has already become confusing for people who are not following this forum on a daily basis, and I'm sure I'd be confused too if I did not follow so closely. So, don't feel bad about it. Since you mentioned having written a "sieve.c" previously, I think I can assume you know what sieving means. In practice, what we're doing here is the same thing. Only, the numbers are bigger.

So, we're already finished with 2, 3, 5, 7, ..., 97, 101, ....., 7901, ............., 100006138273, ............, 780687340571, ............, 3374139548467, ............, 12354780253471, .......... as divisors, and are now trying to find if the numbers we're searching are divisible by primes as high as ......., 19007281822801, ........... . ;)

Of course, the algorithm does something more complicated, like the baby-steps giant-steps thing, the biquadratic test, the Kronecker test, and stuff like that (which eliminates the need to test each and every prime as a divisor), but as an average participant I really do not feel the need to understand them. All I need to know is that there is a brainstorming among mathematicians which, as a result, brings me faster and faster clients.

Anyway, back to the previous paragraph, 19007281822801 is roughly in the range of 19 trillion. That's where Ts and Gs come into the picture. 19 trillion, for example, can be abbreviated as 19T (T=tera), or 19000G (G=giga).

To sum up, we're currently trying to find 14-digit divisors (from 10T to 99T - which will become 15 digits when (or if) we reach 100T) for our candidates, which are within the range 3,000,000 < n < 20,000,000 (that means roughly 900,000 to 6,000,000 digits), for the 12 k values we are testing in the main project. I'm sure you're familiar with k and n, but to be sure, they are the k and n in the formula k*2^n+1 (as in one of the primes we previously found: 54767*2^1337287+1).
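
If seeing it in code helps, here's a toy Python snippet (the numbers are just the ones quoted above):

import math

p = 19007281822801           # a divisor size currently being sieved: ~19 trillion = 19T = 19000G
print(p / 10**12, "T")       # -> about 19 T

k, n = 54767, 1337287        # the prime found earlier: 54767 * 2^1337287 + 1
digits = math.floor(math.log10(k) + n * math.log10(2)) + 1   # digit count (the trailing +1 doesn't change it)
print(digits)                # -> 402569 digits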


I glean from various side-comments, for example, that sieving can be done on older/slower/weird hardware
In fact, all PCs can sieve. There's no limitation to that. The problem is that those slower machines cannot be used effectively in prp testing. That's why there are such comments. We are trying to get the benefit of sieving without slowing down the computing power of the main project. Since sieving does not need as much computing power as the prp client (and also does not need an internet connection on the PC it runs on), it is a better allocation of resources for the project to use faster machines for prp testing, and slower machines (which are otherwise gathering dust, as you mentioned) and machines without an internet connection for sieving. Thanks to the sieving sub-project, there's no such thing as :trash: anymore.


I hope the links and comments above were helpful to you. Please do not hesitate to ask if you have further questions.

Hope to see you in the sieve coordination thread (http://www.free-dc.org/forum/showthread.php?s=&threadid=2406) soon. ;)