Page 13 of 18 FirstFirst ... 391011121314151617 ... LastLast
Results 481 to 520 of 709

Thread: Sieve Client Thread

  1. #481
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    (Finally) new sieve/factor scoring.

    First point. All factors submitted up to yesterday have been scored under the old system, and their scores have been frozen.

    All new factors will be scored according to the new system.

    A unique factor will score as follows:

    p < 40T: score = p/1T (i.e. as before)
    p > 40T, in 'active' window, 0 PRP tests performed: score = (n/1M)^2 * 125
    p > 40T, in 'active' window, 1 PRP test performed: score = (n/1M)^2 * 125 * 0.6
    p > 40T, in 'active' window, 2 PRP tests performed: score = (n/1M)^2 * 125 * 0.2
    p > 40T, outside 'active' window: score = as duplicate (see below)

    A duplicate factor will score as follows:

    score = p/100T, capped at 35.

    Excluded factors (those factors not present after sieving 100<n<20M to p=1G) do not score.
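    The rules above can be condensed into a small sketch (Python; the function name, argument names, and units are mine, not the stats code's — p is measured in T, n in M):

    ```python
    def factor_score(p, n, unique, in_active_window, prp_tests, excluded=False):
        """Score one submitted factor under the new system.

        p: factor in T (1e12), n: exponent in M (1e6).
        Illustrative sketch of the posted rules only.
        """
        if excluded:
            return 0.0                       # excluded factors don't score
        dup = min(p / 100.0, 35.0)           # duplicate: p/100T, capped at 35
        if not unique:
            return dup
        if p < 40.0:
            return p                         # p/1T, i.e. as before
        if not in_active_window:
            return dup                       # scored as a duplicate
        multiplier = {0: 1.0, 1: 0.6, 2: 0.2}[prp_tests]
        return (n ** 2) * 125.0 * multiplier
    ```

    This reproduces MikeH's later worked examples: a unique factor at n=4M with no PRP tests scores 2000, and one at n=300K with 2 PRP tests scores 2.25.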


    The following aspects of scoring have changed.

    (1) A unique factor is the first factor found for a candidate (previously it was the lowest p).
    (2) Scores for each unique factor are remembered. Scores can go up (as an 'active' window moves to cover a factor that was above the window), but cannot go down (as a factor moves out of a window).
    (3) The 'active' windows are 0 < n < (<next double check candidate> + 200K) and (<next candidate>) < n < (<next candidate> + 500K). So currently that's about 0 < n < 590K and 4.150M < n < 4.65M.


    The scoring has been changed to reflect the benefit of finding factors to the main SB project. Anyone who chooses to sieve will find that many of their factors do not fall into an 'active' window, and thus will score very low until the main SB project (and thus the main window) moves forward. Since the main window is 500K wide, on average about 1 in 40 factors will immediately fall into it. Anyone who chooses to factor will score immediately for all their finds (provided they stay ahead of the window!). I think a good analogy is that factoring equates to cash in the bank, while sieving generates some cash and a lot of stock options.

    It's now up to everyone what they do. Right now I think sieving will continue to yield enough factors in the active windows for it to be better than factoring (and then there's all those stock options!?), but that's my gut feel, I haven't actually looked at it in any depth.

  2. #482
    Senior Member
    Join Date
    Jan 2003
    Location
    U.S
    Posts
    123
    Thanks Mike, now let's see who can be the first "millionaire"

  3. #483
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Mike,

    Congratulations on the new scoring system.

    I think the score gained from finding factors for a candidate that is already PRP tested should be much less.

    Would you consider changing 0 < n < (<next double check candidate> + 200K) to (<next double check candidate>) < n < (<next double check candidate> + 200K)?

    Or alternatively, changing the 0.2 in p > 40T, in 'active' window, 2 PRP tests performed, score = (n/1M ^ 2) * 125 * 0.2 formula to a much lower value,

    Or simply putting a cap at 35 would serve the same purpose as well.

    What do you think?

  4. #484
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    Nuri,
    I like your cap idea. The issue is pretty moot until the double check limit reaches 1.2M though.

    Mike,
    looks like a fine scoring system, but one thing is unclear: what happens to a large factor with <double check limit>+200K < n < <next candidate> that scores 35 (the cap)? When the double check limit rises and the factor's n value enters the lower active window, the factor's score would decrease if it's allowed to change. Will it decrease?

    Mikael

  5. #485
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Would you consider changing 0 < n < (<next double check candidate> + 200K) to (<next double check candidate> ) < n < (<next double check candidate> + 200K)?

    Or alternatively, changing the 0.2 in p > 40T, in 'active' window, 2 PRP tests performed, score = (n/1M ^ 2) * 125 * 0.2 formula to a much lower value,

    Or simply putting a cap at 35 would serve the same purpose as well.
    Nuri, thanks for the comments. Your first suggestion was how I'd originally planned to do it, but then I thought it was worth rewarding a factor however small, because it is final. I guess I could reduce the 0.2, but I figured that the scores will be so small in any case that I don't want to push them down too much. A factor at n=4M with no PRPs scores 2000; a factor at n=300K with 2 PRPs scores 2.25. Since the margin between the double check SB and main SB is likely to remain this big, do we really need to penalise more?

    The cap sounds like the best idea. I'll have a think.

    looks like a fine scoring system, but one thing is unclear: what happens to a large factor with <double check limit>+200K < n < <next candidate> that scores 35 (the cap)? When the double check limit rises and the factor's n value enters the lower active window, the factor's score would decrease if it's allowed to change. Will it decrease?
    Mikael, again, thanks for the comments. Good point. Right now the score won't decrease, it will stay at 35. I feel that a different cap may be required; the current cap assumes that sieving could continue right up to 3500T, which is clearly crazy.

    I think a more realistic (high) limit is 500T, which means a cap of 5 - but I need to think about the application.

    Mike

  6. #486
    What is the siever's limit? How large a value could I put in and still have it do the crunching? Just wondering, because I can't test for myself until my range finishes, and I currently have one that will be running until the end of summer.

  7. #487
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Another score update - note the file extension has changed.

    Now you can drill down on the details of your scores. This should make it a bit easier for sievers to follow what's happening (and what's going to happen) to their scores. You now get the following for all your unique factors.

    largest scores
    most recently changed scores
    factors next to enter main active window
    factors next to enter double check active window
    most recent finds

    Enjoy.


    P.S. I've also plugged a potential scoring flaw. Where a factor is to be scored as a 'duplicate', it will further be capped to what it would earn if it were unique.
    Last edited by MikeH; 07-26-2003 at 03:19 PM.

  8. #488
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    What is the siever's limit? How large a value could I put in and still have it do the crunching? Just wondering, because I can't test for myself until my range finishes, and I currently have one that will be running until the end of summer.
    If you mean how many factors you can submit in one go on the web form, the answer is many hundreds. You should be OK even leaving it running over the summer.

  9. #489
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Mike, nice job again. It's now much easier to understand what's going on with the scoring.

    Just a little request:

    Would you consider adding a new column ("old score") to the user scoring details that will show the old score for the factor as well?

    Of course, this column will be empty (like the "Date score changed" column) if the score has not changed yet (or, the option is still not "in the money").


    EDIT: And a second request:

    Could you please limit the k/n pairs that show up in the "Factors next to enter (main) 'active window' " and " Factors next to enter (double check) 'active window' " portions of the detailed stats to those that have factors p>40T. The way it is now, it might create wrong expectations.
    Last edited by Nuri; 07-26-2003 at 05:33 PM.

  10. #490
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Originally posted by Keroberts1
    What is the siever's limit? How large a value could I put in and still have it do the crunching? Just wondering, because I can't test for myself until my range finishes, and I currently have one that will be running until the end of summer.
    Or, if you mean how large a p value you can enter, there are two answers:

    1- The limit of p that the client will ultimately be able to reach is much higher than we would need to sieve. Louie once wrote that it should be something like 1,152,921,504,606,846,976 (or 2^60, or 1152921T).

    2- However, for the last couple of months (or, if you prefer, since the last couple of client upgrades), it has been limited to 2^48 (or 281,474,976,710,656, or 281.4T). The siever will stop itself exactly at that number if you are trying to reach it from a lower number, or it will stop immediately if you try to start from a higher number.

    I think this is a precaution taken to let us focus our computing effort on the ranges where we can find many more factors per given time. I'm sure it will be increased in case we reach numbers that high.

    BTW, you can test the limit yourself too. Simply create a new folder, copy your SoBsieve.exe and SoB.dat files to it, and start the client there. You will see that it will not run if you enter 281475 (or something higher) in the "Sieve from p =" cell.

    And of course, if you run two clients at the same time, your sieve speed will drop by 50%.
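    The two p limits quoted above check out arithmetically (a quick sanity check, nothing client-specific):

    ```python
    # Louie's theoretical client limit: 2^60, i.e. 1152921T
    assert 2**60 == 1_152_921_504_606_846_976

    # The current client-imposed ceiling: 2^48, i.e. about 281.4T
    client_limit = 2**48
    assert client_limit == 281_474_976_710_656
    print(client_limit // 10**12, "T")  # 281 T
    ```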

  11. #491
    I'm quite confident we'll pass that limit before sieving is no longer worth our time.

  12. #492
    However, if someone wants to ready a dat file for 20-40 million, then perhaps at that point we should add those numbers so we can presieve them too, since it won't take much more processor power to add them in, and only then might those ranges be more worthwhile to sieve.

  13. #493
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    I think it would be too early to worry about sieving n values above 20 million before the main project reaches testing numbers as large as 15 million (or maybe even larger numbers - 18 million?). It's highly likely that we have at least a couple of years before we reach there.

    Also, hopefully a couple of primes will be found before we reach 20 million. So, if we start sieving values larger than 20 million today, a significant portion of that sieving effort will be pointless (since we will not need the information for those k values anymore).

  14. #494
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Would you consider adding a new column ("old score") to the user scoring details that will show the old score for the factor as well?
    It is done (you'll see it in the next stats run). I've also added a "will be" score, but don't get too excited about the 50000 for those n~20M.

    Could you please limit the k/n pairs that show up in the "Factors next to enter (main) 'active window' " and " Factors next to enter (double check) 'active window' " portions of the detailed stats to those that have factors p>40T. The way it is now, it might create wrong expectations.
    Ah, that wasn't supposed to be that way. That will need to wait until tomorrow for a fix now.

  15. #495
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    Could you please limit the k/n pairs that show up in the "Factors next to enter (main) 'active window' " and " Factors next to enter (double check) 'active window' " portions of the detailed stats to those that have factors p>40T. The way it is now, it might create wrong expectations.
    OK, that's fixed - now only scores that will change are displayed in these sections. Again, this won't be seen until the next update.

    Next task is to get this thing updating 4x each day instead of once.

  16. #496
    It may not be that long after all before we reach 20 million, only perhaps 2-3 years if computers get a small increase in speed and if our membership increases slightly (finding another prime?). Also I thought it was a fact that having more n values being sieved at once didn't slow the speed of the client. At least that was why I didn't think trimming the ranges would help?

  17. #497
    Senior Member
    Join Date
    Jan 2003
    Location
    U.S
    Posts
    123
    2-3 years is more than enough time needed to sieve ranges n>20M, and in order for there to be only 2-3 years left, we would need to discover at least 6 primes and have our computing power triple.

    When we reach n=10M or so, then we should trim our sieve ranges.

  18. #498
    Question:

    How come my sieve scores page (http://www.aooq73.dsl.pipex.com/ui/3962.htm) doesn't have all the cool stuff like most people's? All I have is High Scores and Most Recent Finds... nothing else.

  19. #499
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Hm, aren't there some posts missing?

    Anyway, here my question again:

    "Did you submit factors recently?"

    I think mklasson has now said that the factors have to be over 40T, and MikeH acknowledged that your factors are in the range of 3xT.

  20. #500
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    and are they >40T? I think they're the only ones subject to the new cool stuff.


    Mikael

  21. #501
    Senior Member
    Join Date
    Jan 2003
    Location
    U.S
    Posts
    123
    quote:
    _________________________________________
    Hm, aren't there some posts missing?
    _________________________________________

    I noticed that too, and at first I thought that something was wrong with my PC. I didn't delete those posts, so I'm guessing that there is either something wrong with the boards (more likely), or the original posters deleted them (which I doubt).

    P.S. I hope my post doesn't get deleted either.

    Update:

    quote:
    ____________________________________________
    Dyy had an equipment failure. Looks like some things had to be re-constructed from backups, which might explain the missing changes to the post....
    ____________________________________________


    Last edited by Moo_the_cow; 08-07-2003 at 06:28 PM.

  22. #502
    Mystwalker

    Can't sleep...was just reading over some of your old info and came across this at the bottom....probably read it 5000 times already...but let me ask a question.

    Wild stab in the dark:
    p=50T by the time you're testing n=3000000
    p=300T by the time you're testing n=6000000

    Does this mean we are PRPing much faster than you expected...or does it mean that sieving is taking out more factors than estimated? Was this prediction before 5 primes were found?

    Just asking...I know it was what it was titled...a wild stab in the dark...but I was looking at the fortune of maybe only having to sieve to 100T when we pull up on 6000000....or am I looking at it backwards?

  23. #503
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Hm, did I really say that?
    Can't remember that... - can you give me the link to that post?

    In addition, I've never used the phrase "Wild stab in the dark" in my whole life AFAIK (possibly because I didn't know it so far? )

  24. #504
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    Short story: I've made a siever that's faster than sobsieve v1.26 under linux (latest available?) and sobsieve v1.35 on a windows P4. It's slightly slower on a windows athlon. I don't know about the speed for other processors. Different versions are available for pre-Pentium Pro and post-PPro (which has cmov instructions). I've also compiled statically linked versions for linux -- I need one for one of my machines.

    The n lower bound is always set to 0 for the moment, so it doesn't make any sense not to sieve both 300K-3M and 3M-20M simultaneously with this program. Use the sob.dat that goes from 300K to 20M.

    I've tried to make the program behave as much like sobsieve as possible, so it uses a SoBStatus.dat file with the same layout for progress saves. Factors are written to fact.txt though. By default the program also finds duplicate factors and occasionally one that is outside the n range. Such factors are written to factexcl.txt and can be ignored completely with the cmdline switch "-d".

    Benchmarks for the range 50000G to 50000G+24M:
    332kp/s windows, athlon 2030MHz, proth_sieve v0.31
    338kp/s windows, athlon 2030MHz, sobsieve v1.35
    163kp/s windows, P4 2.4GHz, proth_sieve v0.31
    144kp/s windows, P4 2.4GHz, sobsieve v1.35
    234kp/s linux, athlon 1533MHz, proth_sieve v0.31
    144kp/s linux, athlon 1533MHz, sobsieve v1.26

    If there's a later version of sobsieve for linux out there, the speed of that one would no doubt be much higher.

    Benchmark on ranges that are multiples of 8 million to get the most correct result. The reason is that I'm precalcing some stuff (sieving primes and factoring p-1) every 8M that's used throughout the following 8M range.

    Download from http://n137.ryd.student.liu.se/sob.php if you want to try it. Use the cmov version if you've got a pentium pro or later.

    Long story:
    I came across something called the Silver-Pohlig-Hellman algorithm for calculating discrete logarithms (DL). It's good when the group order is a product of small factors, with complexity O(sqrt(p)) where p is the largest prime factor.

    Using it you can calculate the DL modulo any divisor of the group order, so for our sieving effort you can get the proper DL modulo a suitably large divisor of order(2,p) and then use that information to do a much cheaper instance of the regular baby-steps/giant-steps for the n range (0-20M, say). The divisor needn't be a single prime factor: you can calculate the DL mod prime factors separately and combine with CRT, or you can calculate the DL mod a product of prime factors. For small prime factors, it's often cheaper to multiply a few of them together and operate on the product.

    Yes, this is what Robert Gerbicz talked about 2 months ago... It's a shame I didn't get my head around it then. Props to Robert.

    Also be sure to check whether the calculated DL matches what's known about the residue mod "t" of the remaining n values. If it doesn't, eliminate that k immediately.
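    For illustration, here is a toy Python sketch of the Pohlig-Hellman idea described above (the function names and structure are mine, not proth_sieve's): solve the DL modulo each prime-power divisor of the group order with baby-steps/giant-steps, then combine the pieces with the CRT. A real siever would only solve modulo a convenient divisor and finish with a small BSGS over the n range.

    ```python
    from math import isqrt
    from functools import reduce

    def bsgs(g, h, p, order):
        """Baby-steps/giant-steps: smallest x in [0, order) with g^x = h (mod p),
        or None if no solution exists."""
        m = isqrt(order) + 1
        baby = {}
        e = 1
        for j in range(m):            # baby steps: store g^j
            baby.setdefault(e, j)
            e = e * g % p
        giant = pow(g, -m, p)         # g^(-m) mod p (Python 3.8+)
        y = h
        for i in range(m):            # giant steps: h * g^(-i*m)
            if y in baby:
                return i * m + baby[y]
            y = y * giant % p
        return None

    def pohlig_hellman(g, h, p, order_factors):
        """Solve g^x = h (mod p), where the order of g mod p factors as
        prod(q^e for q, e in order_factors). Toy version: each prime-power
        subproblem is one BSGS of size sqrt(q^e), combined with the CRT."""
        n = reduce(lambda a, b: a * b, (q**e for q, e in order_factors))
        residues, moduli = [], []
        for q, e in order_factors:
            qe = q**e
            gi = pow(g, n // qe, p)   # generator of the order-q^e subgroup
            hi = pow(h, n // qe, p)
            residues.append(bsgs(gi, hi, p, qe))
            moduli.append(qe)
        x = 0
        for r, m in zip(residues, moduli):
            ni = n // m
            x += r * ni * pow(ni, -1, m)   # CRT combination
        return x % n
    ```

    For example, with p = 101 the order of 2 is 100 = 2^2 * 5^2, and pohlig_hellman(2, pow(2, 37, 101), 101, [(2, 2), (5, 2)]) recovers 37.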

    The problem is that you need to find order(2,p) quickly. Trial factoring every p-1 completely is too slow, so a much better approach is to sieve even numbers at the same time you sieve odd numbers to get the primes p. Save the found prime factors for all even numbers that are one less than a prime, and then use them later to calculate order(2,p).
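    Given the factorization of p-1, order(2,p) drops out of a standard divide-and-test loop; a minimal sketch (assuming p is prime and the factorization is supplied, as in the sieving scheme above):

    ```python
    def order_of_2(p, pm1_factors):
        """Multiplicative order of 2 mod the prime p, given p-1's factorization
        as (prime, exponent) pairs: start at p-1 and strip each prime factor q
        for as long as 2^(order/q) is still 1 (mod p)."""
        order = p - 1
        for q, e in pm1_factors:
            for _ in range(e):
                if pow(2, order // q, p) == 1:
                    order //= q
                else:
                    break
        return order
    ```

    For example, order_of_2(73, [(2, 3), (3, 2)]) is 9, since 2^9 = 512 = 7*73 + 1.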

    Another improvement involves the "t-power test": instead of checking (-k*2^a)^((p-1)/gcd(p-1,t)) you can test (-k*2^a)^(order(2,p)/gcd(order(2,p),t)), which eliminates some extra k.

    Paul, I'm pretty sure you could spice up sobsieve as well to gain somewhere between 20 and 50-ish percent with this approach. And don't hesitate to tell me if you feel like sharing your 25 instruction mul_mod. I'll happily kill for it.

    I also found what seems to be a small bug in sobsieve v1.35 while I was fiddling around. It reports 44448242721917 | 22699*2^12158038+1 as a new factor even though that k,n pair isn't included in the sob.dat file.

    Anyhow, I'd be happy to hear your results if you do try it out.

    Regards,
    Mikael

  25. #505
    Senior Member
    Join Date
    Dec 2002
    Location
    Madrid, Spain
    Posts
    132
    I have tried it (for some minutes), but I haven't seen any rate output... I'm too lazy to do it manually .
    Could you add a priority level parameter?
    Thanks!

  26. #506
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    Currently the rate is only output at the end of execution. Just test a smaller range -- you can edit SoBStatus.dat manually or give the range on the cmdline, as in
    "proth_sieve.exe 50000000000000 50000024000000", in which case SoBStatus.dat is overwritten so be careful not to trash any work in progress.

    As for a priority parameter, use the "nice -n20" (for linux) or "start /LOW" (windows) utilities for now. I'm too lazy to do that.

    Mikael

  27. #507
    Senior Member
    Join Date
    Dec 2002
    Location
    Madrid, Spain
    Posts
    132
    K6-2@3x105 MHz, Win98 SE:
    Statistics:
    pmin : 60000000000000
    pmax : 60000015000000
    # Tested p : 471803
    # Tested k : 5661636
    # Whacked k : 4461754
    # Whacked p : 116182
    Total time: 324451 ms
    46k p/s.
    Wow!
    SoBSieve 1.21 -> 16 kp/s
    NBeGon 0.10 -> 6,6 kp/s

    Thank you very much! Now, I'll benchmark it on the Tualatin.
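    The kp/s figure in this output is just the sieved p-range divided by the elapsed time; a quick check (the helper name is mine):

    ```python
    def rate_kps(pmin, pmax, total_ms):
        """Average sieve rate: p sieved per millisecond equals kp per second."""
        return (pmax - pmin) / total_ms

    # The K6-2 run above: 15M p in 324451 ms
    print(round(rate_kps(60_000_000_000_000, 60_000_015_000_000, 324_451)))  # 46
    ```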

  28. #508
    Senior Member
    Join Date
    Dec 2002
    Location
    Madrid, Spain
    Posts
    132
    Tualatin@10x140 MHz

    -= Windows 2000 SP3 =-

    proth_sieve 0.31:
    Statistics:
    pmin : 60000000000000
    pmax : 60000010000000
    # Tested p : 314507
    # Tested k : 3774084
    # Whacked k : 2975683
    # Whacked p : 77661
    Total time: 45366 ms
    220k p/s.
    proth_sieve_cmov 0.31:
    Statistics:
    pmin : 60000000000000
    pmax : 60000010000000
    # Tested p : 314507
    # Tested k : 3774084
    # Whacked k : 2975683
    # Whacked p : 77661
    Total time: 44094 ms
    226k p/s.
    SoBSieveConsole 1.34: 230 kp/s

    -= Windows 98 SE =-
    proth_sieve 0.31:
    Statistics:
    pmin : 60000000000000
    pmax : 60000010000000
    # Tested p : 314507
    # Tested k : 3774084
    # Whacked k : 2975683
    # Whacked p : 77661
    Total time: 45981 ms
    217k p/s.
    proth_sieve_cmov 0.31:
    Statistics:
    pmin : 60000000000000
    pmax : 60000010000000
    # Tested p : 314507
    # Tested k : 3774084
    # Whacked k : 2975683
    # Whacked p : 77661
    Total time: 44271 ms
    225k p/s.
    SoBSieveConsole 1.34: 218 kp/s

    The improvement of the CMOV version is not as big as I expected.

  29. #509
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    > The improvement of the CMOV version is not as big as I expected.

    It's about twice that percentage, around 6-7%, on my athlon. Anyway, it's a cheap boost.

    You'll probably see those tualatin rates increase by 4-5% or something if you benchmark a range that's a multiple of 8 million or use a larger range to let the effect level out.

    Mikael

  30. #510
    OK... I have an AMD Athlon at approximately 1.7GHz. Should I use this? What do I need to do to install it? I haven't downloaded it yet, and I only want to if it's going to be very easy to install and not take a lot of effort to get going and update and so on, as the current sobsieve program is. If this is all gonna happen though, I would definitely love to see any sort of increase in sieve speed.

  31. #511
    Senior Member
    Join Date
    Dec 2002
    Location
    Madrid, Spain
    Posts
    132
    Just download it and place it in the same dir as SoBSieve, or create another dir and copy the sobstatus.dat and sob.dat files into it. As with sobsieve, you only need to run the executable. Before that, I recommend you do some benchmarking - maybe sobsieve is still the fastest on your machine!

  32. #512
    Senior Member
    Join Date
    Dec 2002
    Location
    Madrid, Spain
    Posts
    132
    proth_sieve is finding many factors for n>20M, which are currently considered "out of range factors". Could you make these factors be recorded in another file, like "factexc20M.txt"? Maybe they will be useful in the future .

  33. #513
    I was about to ask the same question... until I read your post... what do we do with these little nuggets of future gold? They are just sitting in a text file waiting to be turned in when the sieve effort goes n>20M. That'd be a lot of cheap points

  34. #514
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Till we get there, I assume several text files are going to get lost for multiple reasons. I suppose it's best to collect them just like the other factors.
    Are they only a bit bigger than 20M?

  35. #515
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    Well, I just uploaded proth_sieve v0.32 to
    http://n137.ryd.student.liu.se/sob.php

    + priority level parameter. Default is now idle for process and normal for thread. This translates to something pretty idle. You can specify both process and thread priority, so you should be able to get precisely what you want. The settings go from 0 to 4 for process and correspond to idle, below normal, normal, above normal, high. For thread the range is 0 to 5 and correspond to idle, lowest, below normal, normal, above normal, highest. On the linux version only process priority is used; the process' priority is then set to 20 - 10 * priority. In short, 0 is idle while 4 is highest, just like under windows.

    + writes factors outside range to factrange.txt instead.

    + rate is output at every update interval (default 10M). The rate displayed is the average during the last full 8M block.

    + the lower limit of the n range is now used. Sieving 3M-20M instead of 0-20M is something like 1 or 2% faster, so using the 300K-20M sob.dat and double sieving is still highly recommended.
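    On linux the process priority mapping described above is just a linear formula; a sketch of it (this function is illustrative, not proth_sieve's actual code):

    ```python
    def nice_for_priority(priority):
        """proth_sieve v0.32 maps its 0..4 process priority setting to a unix
        nice value as 20 - 10 * priority: 0 is idle (nice 20), 2 is normal
        (nice 0), 4 is highest (nice -20)."""
        if not 0 <= priority <= 4:
            raise ValueError("process priority must be 0..4")
        return 20 - 10 * priority
    ```

    So the default idle process priority corresponds to nice 20, while the top setting needs root (negative nice values).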

    This version may be a really tiny bit faster than 0.31, but I'm pretty sure you won't notice it.

    Regards,
    Mikael

  36. #516
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    >Are they only a bit bigger than 20M?

    Some go up to about 35M (I figure it'll take a while before they become useful ). These extras have a bigger chance of popping up when p-1 is pretty highly composite.

    Mikael

  37. #517
    Had a PC in the shop getting w32.Blaster.worm strain C yanked from it...so it's sitting here all week while the owner is off on vacation...so I decide to install SOBSieve on it to benchmark the P4 since I only run AMD chips myself.

    I've never seen anything in my life more useless than a P4 2.4GHz Dell running WinXP Home with 128MB of SDRAM. Yep...128MB of SDRAM. I've seen faster results in the World Championship Blonde Hide and Go Seek competition filmed in stop motion video. Machine kicks major donkey in ECC2-109..but everything else it appears to be a black on grey paper weight.

  38. #518
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Sieving isn't the P4's favorite discipline. I'd use it for P-1 factoring or (due to the memory constraints) for the standard SoB - unless it isn't connected to the internet from time to time...

  39. #519
    Senior Member
    Join Date
    Dec 2002
    Location
    Madrid, Spain
    Posts
    132
    I have run WinXP on my K6-2 at 300 MHz with 128 MB of RAM; when properly configured, it works well.
    As Mystwalker said, P4s are good for SB, P-1 factoring and Prime95, and aren't good for sieving. That's because their x87 floating point unit is very weak under heavy load, but if you use SSE2, you can get as many instructions per cycle as with the Athlon's x87 FP unit.
    Plus, the Pentium 4 is much more bus and memory speed dependent than any other recent CPU. SDRAM really kills the Pentium 4's performance.

    Thanks mklasson!

  40. #520
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    http://n137.ryd.student.liu.se/sob.php contains a just released v0.33. Sorry about the abundance of new versions.

    This one's about 2% faster across the board.

    Mikael

