
Thread: P-1 factorer

  1. #121
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    Originally posted by Mystwalker
    When does the program get the factor (if there is one within the bounds)? Always at the end of the run (read: step 2 100% done) or inbetween?
    After the GCDs (at the end of stage 1 and stage 2).

    As for finding factors, I got one last night:
    155994863214049 | 10223*2^4081145+1
    2^5*3*509*16547*192931+1

    I'm also using the optimal bounds, -x 45 no double check. One test takes about 2h15m on my 2GHz athlon.
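
    For anyone who wants to double-check a reported factor, here's a minimal sketch (not part of the factorer; assumes gcc/clang for unsigned __int128) that verifies p | k*2^n+1 by computing k*2^n+1 mod p:
    Code:
    /* Minimal sketch: verify that a reported factor p divides k*2^n+1
       by computing (k * 2^n + 1) mod p with 64-bit modular arithmetic. */
    #include <stdio.h>
    #include <stdint.h>
    
    static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
    {
    	return (uint64_t)((unsigned __int128)a * b % m);
    }
    
    static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m)
    {
    	uint64_t r = 1 % m;
    	b %= m;
    	while (e) {
    		if (e & 1) r = mulmod(r, b, m);
    		b = mulmod(b, b, m);
    		e >>= 1;
    	}
    	return r;
    }
    
    int main(void)
    {
    	uint64_t p = 155994863214049ULL;	/* factor reported above */
    	uint64_t k = 10223, n = 4081145;
    	uint64_t r = (mulmod(k, powmod(2, n, p), p) + 1) % p;
    	printf("(k*2^n + 1) mod p = %llu  (0 means p divides it)\n",
    	       (unsigned long long) r);
    	return 0;
    }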

  2. #122
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Originally posted by Mystwalker
    I have completed only 2 or 3 tests so far. But 2 are almost ready.
    When does the program get the factor (if there is one within the bounds)? Always at the end of the run (read: step 2 100% done) or inbetween?
    It also creates a fact.txt file in the folder it is run from. No factors yet for me after 7 completed tests.
    Originally posted by mklasson
    I'm also using the optimal bounds, -x 45 no double check. One test takes about 2h15m on my 2GHz athlon.
    BTW, mklasson, you mean 0 by no double check, right? If that is the case, it means that PIVs should not be used for P-1 either, because it takes ~6 hours per test on my PIV-1700 (and garo previously wrote that it takes 7 hours to finish a test on a P4-2533).

  3. #123
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    Originally posted by Nuri
    BTW, mklasson, you mean 0 by no double check, right? If that is the case, it means that PIVs should not be used for P-1 either, because it takes ~6 hours per test on my PIV-1700 (and garo previously wrote that it takes 7 hours to finish a test on a P4-2533).
    Yes, 0. But the optimal bounds calculator seems to give different B bounds for different processors. Xrillo's P4-2400 gets B1=100000, B2=1575000 assigned while my athlon gets B1=80000, B2=440000 for the same range. Despite the bigger B bounds, his P4 _seems_ to take about as long as my athlon to complete a test. We'll see once he's done with one...

    Just found another factor btw
    43473882553589 | 22699*2^4081150+1
    2^2*41*257*5657*182333+1
    I've found 2 factors in 8 tests so far. Not bad considering the estimated total was 0.38 factors in 18 tests. Is the estimator fishy or am I just lucky?

  4. #124
    Originally posted by mklasson
    Is the estimator fishy or am I just lucky?
    You're just incredibly lucky. For comparison, I have done 42 tests on my main system and found 0 factors. I have also run 10 more tests on a couple of P4 Linux machines. 0 factors found there too.

    Considering each test has a probability of around 2%, I'm only slightly unlucky at this point to have not found a factor yet.
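
    A quick binomial sketch of these odds (a back-of-the-envelope check, taking the ~2% per-test probability at face value):
    Code:
    /* Sketch: binomial odds for the factor counts discussed above,
       assuming ~2% probability of finding a factor per P-1 test. */
    #include <stdio.h>
    #include <math.h>
    
    /* probability of exactly k successes in n trials, per-trial prob p */
    static double binom(int n, int k, double p)
    {
    	double c = 1.0;
    	for (int i = 0; i < k; i++)
    		c = c * (n - i) / (i + 1);
    	return c * pow(p, k) * pow(1.0 - p, n - k);
    }
    
    int main(void)
    {
    	double p = 0.02;
    	printf("P(0 factors in 52 tests)  = %.3f\n", binom(52, 0, p));
    	printf("P(>=2 factors in 8 tests) = %.3f\n",
    	       1.0 - binom(8, 0, p) - binom(8, 1, p));
    	return 0;
    }
    With p = 0.02 this gives roughly a 35% chance of going 0-for-52 and only about a 1% chance of 2 factors in 8 tests.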

    You and ceselb should go buy lotto tickets.

    -Louie

  5. #125
    Senior Member
    Join Date
    Jan 2003
    Location
    UK
    Posts
    479
    It is interesting to look at the relative lack of factor finds from P-1 factoring to date. Active factoring at n>4M has been ongoing for more than one week, yet the bottom table in the sieving stats shows no noticeable difference between 4M<n<5M and any other similar band of n.

    I guess this either shows that not much effort is being directed towards P-1 factoring, or that it is not yielding factors at a worthwhile rate. Which do we think it is?

    I totally agree that we need to experiment with any and all ways of removing candidates, but right now it would seem that sieving is much more efficient.

  6. #126
    I think that the current bounds are suboptimal. That is probably the reason why factors are not being found fast enough. And of course, sieving is still more efficient. But P-1 gives us the opportunity to try and find factors for numbers about to be tested so that's good!

  7. #127
    Junior Member
    Join Date
    Feb 2003
    Location
    Linköping, Sweden
    Posts
    17
    Originally posted by mklasson
    Xrillo's P4-2400 gets B1=100000, B2=1575000 assigned while my athlon gets B1=80000, B2=440000 for the same range. Despite the bigger B bounds, his P4 _seems_ to take about as long as my athlon to complete a test. We'll see once he's done with one...
    Well, my computer finished a test during the night. It took 29570 seconds (roughly 8 hours and 15 minutes) at 50% cpu. No factor found.

    For those who really want to know:
    my computer is a p4 2.4GHz with 512MB 333MHz RAM. I guess the main bottleneck is the speed of my RAM... and the fact that I bought an intel motherboard that doesn't let me run the cpu at higher speeds. Oh well, at least it is very stable...

    As for the "p-1 factoring is less efficient than sieveing"-part -- I don't run the sieve anymore since I realized it was running 2-3 times faster on AMD based computers (since I run a p4). The factoring however is based on code from gimps...giving my p4 computer a slight boost over AMD based ones.

    Oh, and thanks for the new PRP client. I'm giving it the remaining 50% of my cpu.

  8. #128
    Originally posted by garo
    And of course, sieving is still more efficient. But P-1 gives us the opportunity to try and find factors for numbers about to be tested so that's good!
    This is the crux of the "efficiency" argument. Yes, on a quantity basis, sieving beats factoring. But on a quality basis, I'd say factoring has a lot going for it.

    For now, sieving still produces 1 factor every 1.2G, but the value of each factor is very difficult to determine.

    As an example: How important was it to the project that I recently discovered that 10223*2^19993577+1 is divisible by 18201366731629? I have no idea. But it did take my system around 2 hours to discover just that one factor by sieving. This saved a test that would have been assigned so far in the future that no one can accurately gauge its value.

    Compare that to the value of a P-1 factor that saves a test a few days away from being assigned. That we can place a value on. I know that each factor garo found will prevent a test that would tie up the average SB user's machine for about a week. And his factors are so large, that no amount of sieving would have found the same factors in time.

    I'd go a step further and say that no amount of sieving would save a similar amount of tests in time. And by that, I mean a computer that sieved for a month would not find as many factors for numbers that would be assigned in the following month as a computer that did P-1 factoring for that same month.

    For the sake of argument, let's say that my last conjecture is wrong and sieving really is more efficient in the scope I defined. Asymptotically, it still works out that even if my statement were wrong today (which I don't think it is), it will eventually be right after some small amount of additional sieving. That's because the factor density is falling off exponentially. So even though we currently need to sieve 1.2G for each factor, when the sieve reaches 50T it will require more like 2.2G/factor. Then when it reaches 250T, we'd have to sieve over 30G just to find one factor. Some people still don't even check out ranges that large. Even if you count on Paul to somehow double the speed of SoBSieve again, you're still in a range where sieving can't find factors much quicker than P-1 factoring.

    My point is that no comparison between sieving and factoring can avoid the issue of "factor quality". If the factor doesn't help SB in anything except the very long term, I'd say it can't be compared to a factor that helps SB right now. That means that any realistic comparison needs some threshold or some gradient that accounts for the fact that not all factors are equally important to discover right now.

    A possible way to compare would be to only count numbers in the next 1M of n-space. This would mean reducing the sieve's effective output to 1/17 = 6% of its raw rate. I'd say that a 0.5M n-space window would be more realistic and a 2-3M n-space window much too large. This would mean that to realistically compare a sieve rate to a factoring rate, you'd have to divide the sieve production by something between 15 and 25.

    Another way to look at it is to redefine what you think about the siever. The normal way to think about it is "it's a program that finds factors between n=3M and n=20M", but I think a better way is "it's a program trying really hard to find factors around n=4M, but most of the time it fails and produces extra, potentially useful factors that we don't need to know right now", or "a program that finds the factors I need right now rather slowly, but for a small performance hit it also finds a lot of factors I don't need to know right now". Basically, it's a cruel way to express the fact that the sieve isn't finding factors in the order that would be most beneficial to the project. Ideally, the sieve would be able to find all the factors in smaller windows of n-values, and then we'd move on to the next window, and so on. But it doesn't... it finds factors over a huge range of n, all spread out. We need to recognize the benefit that factoring gives us in being able to focus our effort more effectively than sieving. This has incredible value.

    -Louie

  9. #129
    Yeah, there's really no need for factors beyond 7M or even lower, Louie... so here's my question: how much faster would normal sieving be using a sob.dat covering only 4-5M instead of 3-20M?

    Of course that would mean more total sieving effort, because it's worthless for all the numbers above 5M, but think about it. Or even better: could someone test it out by making a small sob.dat?

    Thommy

  10. #130
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Effort is proportional to sqrt(n-range), so this means it would be sqrt(17) = 4.12 times as fast.

  11. #131
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    I just tried this with sobsieve and a custom 4M-5M sob.dat. It was about 34% faster on my 2GHz athlon.

    Found a new factor overnight as well. Yay
    31734311619887 | 21181*2^4081700+1
    2*3413*63863*72797+1
    Success in stage 1 saved an hour.

  12. #132
    I've found a couple of bugs in the optimal bound checker.

    For one, it was overestimating the cost of each GCD. Even though it was technically wrong, it doesn't seem to dramatically affect the bounds finally chosen; in fact, it doesn't change them at all for my range.

    Another thing was somewhat bigger. The current client uses 60MB of memory by default in stage 2 for temp space. However, the bound guesser was assuming that it only had 24MB available. This makes it drastically underweight the B2 limit. What's more, if you increase the memory used to more like 128MB, it raises the B2 limit (and the factor probability) much higher still. This is because it will be able to process each prime in the B2 stage faster with more temp space.
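
    To illustrate the mechanism (with made-up sizes, not the actual values for a 4.1M-bit number), the number of stage 2 temporaries is roughly (memory - FFT overhead) / size of one gwnum, mirroring the choose_pminus1_numvals() routine quoted later in this thread:
    Code:
    /* Sketch: how the memory setting translates into stage 2 temporaries.
       The two sizes below are placeholders, not real figures. */
    #include <stdio.h>
    
    int main(void)
    {
    	const double gwnum_bytes = 3.0e6;	/* placeholder: one temporary */
    	const double fft_overhead = 6.0e6;	/* placeholder: fixed FFT tables */
    	const double mem_mb[] = { 24, 60, 128, 256 };
    
    	for (int i = 0; i < 4; i++)
    		printf("%4.0f MB -> ~%3.0f stage 2 temporaries\n", mem_mb[i],
    		       (mem_mb[i] * 1e6 - fft_overhead) / gwnum_bytes);
    	return 0;
    }
    More temporaries means each stage 2 prime is processed with less work, which is why the optimal B2 climbs with the memory setting.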

    here's a quick comparison of P-1 on 10223*2^4150085+1

    24MB (current bounder):
    B1=55000 B2=288750 Success=0.017117 Squarings=135901
    60MB (what the current bounder should currently return):
    B1=65000 B2=601250 Success=0.021802 Squarings=159729
    128MB:
    B1=65000 B2=731250 Success=0.022867 Squarings=161565
    256MB:
    B1=70000 B2=910000 Success=0.024605 Squarings=174113
    512MB:
    B1=70000 B2=980000 Success=0.025033 Squarings=176561
    1024MB:
    B1=70000 B2=980000 Success=0.025033 Squarings=175109
    2048MB:
    B1=70000 B2=997500 Success=0.025134 Squarings=175734

    Do I think people should use 2GB of mem? No. But the current bounder is definitely cutting short the potential of the program by being so stage-2 averse due to a perceived lack of memory.

    The next version only has one memory value and it will be entered on the command line instead of assumed by the program. People can choose whatever they want, but I wouldn't run it with less than 128MB.

    So the next version will allow you to decide exactly how much you value a factor (not just the double check flag) and will let you pick the amount of mem you can use. It also displays the type of processor it detects.

    Anyone else have ideas before I compile v1?

    -Louie

  13. #133
    I am still convinced that the bounds estimation routine is overestimating the cost of an LL test. I wish I could be more specific, but I've looked at the common.c code several times and I cannot see what is wrong.

  14. #134
    I am still convinced that the bounds estimation routine is overestimating the cost of an LL test.
    Might it be that it also takes a double check and a small error factor into account?

  15. #135
    Senior Member
    Join Date
    Jan 2003
    Location
    U.S
    Posts
    123
    I suggest that the status of P-1 factoring be saved. What I mean is that when I am in stage 1 23% complete, and I close the program and start it again, it starts over at 0% complete. When I do this with GIMPS, the program starts over at 22.7% instead of 0% like SB.

  16. #136
    smh,
    I think it is overestimating even after taking that into account.

    As an example, my P4 2533 takes 76 minutes to do a P-1 with the doublechecking flag on i.e. when it should not take the doublecheck into account. The prob of finding a factor here is 1.9% which means the average time to find a factor is 3900 minutes. This is 65 hours. I think a P4 2533 will finish a 4M test in less than 65 hours, don't you?

  17. #137
    Originally posted by garo
    I think it is overestimating even after taking that into account.
    Looking deeper at the code, I think I agree. The bound checker is doing a direct squaring to squaring comparison. It chooses the number of squaring operations it will do (aka the bounds it picks) based on the expected # of squaring operations the prp test will take. A prp test normally takes "n" squarings where n is the exponent in the proth number. However, I included some code in the bound checker that resized n to correspond to the FFT size it uses compared to GIMPS (twice as big). But since both the squaring routines (SB and SBFactor) already use the double-sized FFT, I was basically accounting for it twice.
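
    To make the fix concrete, here's a rough sketch of the comparison the bound picker is making (hypothetical names, not the actual common.c code):
    Code:
    /* Sketch: weigh the P-1 work against the prp squarings it can save.
       A prp test on k*2^n+1 costs about n squarings in the same
       double-sized FFT, so the bug amounted to using 2*n instead of n. */
    #include <stdio.h>
    
    int main(void)
    {
    	/* 60MB bounds for 10223*2^4150085+1 from the post above */
    	double n = 4150085.0;
    	double p1_squarings = 159729.0;
    	double prob = 0.021802;
    	double factor_value = 1.5;	/* factor valued at 1.5 prp tests */
    
    	printf("P-1 cost: %.0f squarings\n", p1_squarings);
    	printf("expected savings, corrected: %.0f   buggy: %.0f\n",
    	       prob * factor_value * n, prob * factor_value * 2.0 * n);
    	return 0;
    }
    Under the buggy 2*n accounting those bounds look like an easy win (about 271000 squarings saved vs 159729 spent); with the corrected cost they don't (about 136000), which fits the lower bounds the new version ends up picking.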

    I'll be releasing a new version soon that fixes this so that the bound guesser is correct. Until then, at least set the double-check flag so that it only values a factor at around 3x the value of a prp test instead of 5x. This definitely explains all the "why is it taking longer to find factors than run tests?" comments a few people made recently.

    The new version will have much better bound picking. Test times will drop significantly assuming people use a more modest 1.2-1.5 factor valuation. Thanks for your comments and data. I'll post the new version as soon as it's ready.

    -Louie

  18. #138
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    What I mean is that when I am in stage 1 23% complete, and I close the program and start it again, it starts over at 0% complete.
    That's strange. I don't have that problem...

  19. #139
    SBFactor v1.0

    http://www-personal.engin.umich.edu/...sbfactor10.zip

    New features:
    -optimal bound guesser not so incredibly wrong
    -"double check?" flag replaced with factor value
    -can use custom amount of memory
    -command line options simplified
    -potential bug with ECM curve counting fixed
    -better command line help
    -linux version included
    -run.bat now checks for command line arguments

    The bound guesser was just plain off. This version should be noticeably more in line with the reality of how long prp tests actually take. It is even possibly correct now.

    The double check flag is removed. Now you can directly tell it how many prp tests you feel a factor is worth (instead of just getting to pick between 1.1 and 2.1). I'd say it should be over 1 and no more than 2. 1.5 is the default in the run.bat/run.sh.

    The memory thing is important. I would recommend using a minimum of 128MB. 256 is better if you can. The more memory it has for stage 2, the more temp space it can use and the faster it will run. The amount of memory you allow it to use will also affect the B2 limit it chooses, since it knows to do more stage 2 testing when it becomes more efficient.

    The other main thing is that the command line options have been changed. If you use run.bat/run.sh it doesn't matter, but otherwise, run the program with no command line options to get an explanation of the new format. Basically I just removed the random "-x", "ecm", and the filenames. I can't imagine anyone will miss them.

    The bounds chosen are now lower so expect tests to run much faster (and have smaller success values). Overall, the tradeoff should increase factor throughput. Right now, I'm testing the linux client on a P4 2.26GHz. It can do each test in 30 minutes. At that rate and with my expected factor values, it should find a factor about every 60 hours. I don't think it could do a single prp test in that time, much less the 1.2 I'm weighing it against.

    Upgrade, experiment, and let me know what you think.

    http://www-personal.engin.umich.edu/...sbfactor10.zip

    -Louie

  20. #140
    Senior Member
    Join Date
    Feb 2003
    Location
    Sweden
    Posts
    158
    Louie,

    very nice new client! Feels good to put some more memory to use.

    I thought I'd reiterate some of the suggestions that have been made before:

    - process the range in order of increasing n instead of k. Would make splitting ranges much easier.

    - ability to print a list of all the numbers that are to be tested in a specified range.

    - also, I think it would be nice to be able to specify the B1 and B2 bounds manually even when running in "range mode". This would be useful if I'd like to, say, increase B2 somewhat after a range is processed and then do a quick pass through it again.

    EDIT: Oh yeah, I think there's something not quite right about the memory usage. Even though I give sbfactor 600MB, task manager only ever says it uses 450MB. I've got 1GB in the machine, so that's not the problem. At startup, sbfactor reports 600MB granted to it.
    Last edited by mklasson; 06-26-2003 at 10:36 AM.

  21. #141
    Originally posted by mklasson
    - process the range in order of increasing n instead of k. Would make splitting ranges much easier.
    I know. Didn't make it into the new version. Next time.

    Originally posted by mklasson
    - ability to print a list of all the numbers that are to be tested in a specified range.
    That wouldn't be hard to add. OK.

    Originally posted by mklasson
    - also, I think it would be nice to be able to specify the B1 and B2 bounds manually even when running in "range mode". This would be useful if I'd like to, say, increase B2 somewhat after a range is processed and then do a quick pass through it again.
    That sounds kind of cool. OK.

    Originally posted by mklasson
    EDIT: Oh yeah, I think there's something not quite right about the memory usage. Even though I give sbfactor 600MB, task manager only ever says it uses 450MB. I've got 1GB in the machine, so that's not the problem. At startup, sbfactor reports 600MB granted to it.
    I think that's normal. I didn't actually test it with more than 256MB but there's nothing saying it will use the max mem. It also has to allocate it in 3 goofy sized chunks so that's probably why.

    here's how it does it:
    Code:
    /* Compute how many values we can allocate */
    
    unsigned long choose_pminus1_numvals (void)
    {
    
    /* Compute the number of gwnum temporaries available */
    
    	return ((unsigned long)
    			(((double) memory * 1000000.0 -
    			  (double) map_fftlen_to_memused (FFTLEN, PLUS1)) /
    			 (double) gwnum_size (FFTLEN)));
    }
    
    void choose_pminus1_plan (
    	unsigned long B,		/* Stage 1 bound */
    	unsigned long C,		/* Stage 2 bound */
    	unsigned long numvals)		/* Returns max number of temps */
    {
    
    /* Handle case where there is no stage 2 */
    
    	if (C <= B) {
    		D = 0;
    		E = 0;
    		return;
    	}
    
    /* Handle case where we are very low on the number of temporaries available */
    
    	if (numvals < 12) {
    		D = 210;
    		E = 1;
    		goto done;
    	}
    
    /* Try various values of D until we find the largest D that doesn't use */
    /* too much memory */
    
    	D = (unsigned long) sqrt (C-B) / 2310 + 1;
    	if (D > numvals / 480) D = numvals / 480 + 1;
    	D = D * 2310;
    	for ( ; ; ) {
    
    /* We guess at the best E for a given D */
    
    		if (D <= 180) E = 2;
    		else if (D <= 420) E = 4;
    		else if (D <= 2310) E = 12;
    		else if (D <= 6930) E = 30;
    		else E = 48;
    
    /* See if this combination of D and E will fit in memory */
    
    		if (D * D < C + C) {
    			if (D >= 2310) {
    				if (D / 2310 * 480 + E + 4 <= numvals) break;
    			} else if (D >= 210) {
    				if (D / 210 * 48 + E + 4 <= numvals) break;
    			} else {
    				if (D / 30 * 8 + E + 4 <= numvals) break;
    			}
    		}
    
    /* Try next smaller value of D */
    
    		if (D > 2310) D = D - 2310;
    		else if (D > 210) D = D - 210;
    		else if (D > 30) D = D - 30;
    		else break;
    	}
    
    /* Allocate more memory */
    
    done:	free (nQx);
    	free (eQx);
    	free (pairings);
    	nQx = (gwnum *) malloc ((D>>1) * sizeof (gwnum));
    	eQx = (gwnum *) malloc ((E+1) * sizeof (gwnum));
    	pairings = (char *) malloc ((D + 15) >> 4);
    }
    -Louie

  22. #142
    YEH! I found a factor

    238525666162151 | 5359*2^4104862+1

    238525666162151 = 2 x 5 ^ 2 x 17 x 277 x 22037 x 45971 + 1

    I just switched my range over to the new SBFactor v1.0 this morning so I'm lucky to have found this factor considering B1 was only 25000. This is probably about as unsmooth a factor as we'll ever hope to find.
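
    For the curious, here's a small sketch (plain trial division, not part of SBFactor) that checks the simplified smoothness condition on p-1 for this factor: every prime factor at most B1, except possibly one between B1 and B2 (prime powers ignored). B1=25000 is from this post; B2=256250 is the bound ceselb reports a couple of posts below for the same 1.5 / 256MB settings.
    Code:
    /* Sketch: factor p-1 by trial division and check the simplified
       P-1 smoothness condition against B1/B2. */
    #include <stdio.h>
    #include <stdint.h>
    
    int main(void)
    {
    	uint64_t p = 238525666162151ULL;
    	uint64_t B1 = 25000, B2 = 256250;
    	uint64_t m = p - 1, largest = 0, second = 0;
    
    	for (uint64_t d = 2; d * d <= m; d++) {
    		while (m % d == 0) {
    			printf("%llu ", (unsigned long long) d);
    			if (d > largest) { second = largest; largest = d; }
    			else if (d > second) second = d;
    			m /= d;
    		}
    	}
    	if (m > 1) {	/* leftover prime cofactor */
    		printf("%llu", (unsigned long long) m);
    		if (m > largest) { second = largest; largest = m; }
    		else if (m > second) second = m;
    	}
    	printf("\nreachable by P-1 with these bounds? %s\n",
    	       (second <= B1 && largest <= B2) ? "yes" : "no");
    	return 0;
    }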

    -Louie

  23. #143
    Moderator ceselb's Avatar
    Join Date
    Jun 2002
    Location
    Linkoping, Sweden
    Posts
    224
    Strange, my B1 was 15000 when I weighed it against 1.2 (B2=138750). I had to use 1.5 to get 25000 (B2=256250).

    This is on a PIV1.5 using 256Mb.

  24. #144
    Originally posted by ceselb
    Strange, my B1 was 15000 when I weighed it against 1.2 (B2=138750). I had to use 1.5 to get 25000 (B2=256250).

    This is on a PIV1.5 using 256Mb.
    I am using 1.5 too (w/ 256MB) so that makes sense. Your processor has nothing to do with the bounds it picks. If you use an identical factor limit, factor value, and memory setting on two completely different machines but on the same numbers, it should pick the same bounds, since the bound guesser doesn't actually look at what your processor is (only the internal multiplication routines look at that).

    -Louie

  25. #145
    Two observations:

    Sbfactorer does not seem to be based on the v23.4 gimps sources. Stage 2 was rewritten two months ago to use more memory if it can, use fewer multiplies, and to be more efficient in low memory situations.

    There was a bug fixed where sub-optimal bounds were picked for the P4. An Athlon and P4 should pick very close or identical bounds.

    Don't expect great improvements, but every little bit helps.

  26. #146
    Originally posted by prime95
    Two observations:

    Sbfactorer does not seem to be based on the v23.4 gimps sources. Stage 2 was rewritten two months ago to use more memory if it can, use fewer multiplies, and to be more efficient in low memory situations.

    There was a bug fixed where sub-optimal bounds were picked for the P4. An Athlon and P4 should pick very close or identical bounds.

    Don't expect great improvements, but every little bit helps.
    You're right. It is using v23.4 objects but the main code in ecm.c isn't updated. I'll update it.

    -Louie

  27. #147
    Finally!!
    Found a factor

    127404468919821091 | 19249*2^4003538+1

    127404468919821090 = 2 * 3 * 5 * 19 * 67 * 71 * 227 * 2731 * 75793

  28. #148
    Moderator ceselb's Avatar
    Join Date
    Jun 2002
    Location
    Linkoping, Sweden
    Posts
    224
    This is insane, just found another factor.

    1518196112941553 | 55459*2^4151566+1

    2^4*11*307*69857*402223+1

    This was the last test I ran before 1.0 and lower bounds. Very lucky indeed.

    5 factors in 46 tests.

    Running the new test with very low bounds to try to do as many as possible before prp catches up in a few days.

  29. #149
    I love 67607
    Join Date
    Dec 2002
    Location
    Istanbul
    Posts
    752
    Feeling much better now.

    114256654697557 | 10223*2^4080101+1

    114256654697557 = 2 ^ 2 x 3 ^ 2 x 7 x 1097 x 17551 x 23549 + 1

  30. #150
    I think we need to use 1.0 and low bounds from now on, otherwise prp will catch up pretty soon. We need to be doing 500 or so P-1 tests every day. We need more machines!!

  31. #151
    As a follow-up, I rustled up a quick spreadsheet to tell us what the rate of finding factors will be for given "factor values" - the factor value is the argument to sbfactor that says how valuable a factor is. Here are the results, with the columns:
    factor value, B1, B2, chance of finding a factor, number of squarings, time on a P4 2533 with 300MB memory in seconds, and average factor yield per day.

    Code:
    1.0	10000	82500	0.005837	22439	812.13	0.620977913
    1.1	10000	87500	0.005940	22878	828.02	0.61980967
    1.2	15000	138750	0.008419	33983	1229.94	0.591410121
    1.3	15000	150000	0.008605	34935	1264.40	0.588003736
    1.4	20000	200000	0.010668	45819	1658.32	0.555811284
    1.5	20000	215000	0.010881	47059	1703.20	0.551970763
    1.6	25000	262500	0.012631	57662	2086.96	0.522923298
    1.7	30000	322500	0.014358	69238	2505.93	0.495038975
    1.8	30000	345000	0.014613	71034	2570.93	0.49109224
    1.9	35000	385000	0.015929	80947	2929.71	0.469761816
    2.0	35000	420000	0.016285	83718	3030.00	0.464364356
    2.1	40000	490000	0.017761	95948	3472.64	0.4418974
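
    For reference, the last column is just the per-test chance scaled to a day of wall clock time: yield per day = chance * 86400 / seconds per test. A minimal sketch reproducing two of the rows above (up to rounding of the printed probabilities):
    Code:
    /* Sketch: reproduce the "average factor yield per day" column. */
    #include <stdio.h>
    
    int main(void)
    {
    	double prob[] = { 0.005837, 0.017761 };	/* fval 1.0 and 2.1 rows */
    	double secs[] = { 812.13,   3472.64  };
    
    	for (int i = 0; i < 2; i++)
    		printf("%.6f factors/day\n", prob[i] * 86400.0 / secs[i]);
    	return 0;
    }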
    So in case we find that prp tests are outrunning us, i.e. P-1 is underpowered, it may make sense to bring the factor value down. I'm running my current batch of tests with fval=2.0, but it may be better to lower it to 1.5 if we end up being underpowered.

  32. #152
    Some more Excel work. I tried to compute the average time it will take to clear a number, given the preceding probabilities and test times. Of course, sbfactor is now picking optimal bounds, and given that two tests are saved it seems that factor value = 2 works best. Nevertheless, I plotted the time to completion with P-1 factoring for various values of the time it takes to do a prp test if a factor is not found. In this case, I do not really consider single or doublechecking; all I am concerned with is the amount of computation time a factor saves.

    Columns: factor value, B1, B2, then the average number of hours to clear a number when a prp test takes 40, 50, 100 or 150 hours.

    Code:
    1	10000	82500	39.99211167	49.93374167	99.64189167	149.3500417
    1.1	10000	87500	39.99240556	49.93300556	99.63600556	149.3390056
    1.2	15000	138750	40.00489	49.9207		99.49975	149.0788
    1.3	15000	150000	40.00702222	49.92097222	99.49072222	149.0604722
    1.4	20000	200000	40.03392444	49.92724444	99.39384444	148.8604444
    1.5	20000	215000	40.03787111	49.92906111	99.38501111	148.8409611
    1.6	25000	262500	40.07447111	49.94816111	99.31661111	148.6850611
    1.7	30000	322500	40.12177167	49.97819167	99.26029167	148.5423917
    1.8	30000	345000	40.12962722	49.98349722	99.25284722	148.5221972
    1.9	35000	385000	40.17664833	50.01735833	99.22090833	148.4244583
    2	35000	420000	40.19026667	50.02741667	99.21316667	148.3989167
    2.1	40000	490000	40.25418222	50.07657222	99.18852222	148.3004722
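
    Each entry above appears to be: hours of P-1 work plus (1 - chance of a factor) times the prp test length, with prp lengths of 40, 50, 100 and 150 hours across the columns. A minimal sketch reproducing the factor value 2.0 row:
    Code:
    /* Sketch: expected hours to clear a number, crediting a factor with
       saving exactly one prp test (no doublecheck considered). */
    #include <stdio.h>
    
    int main(void)
    {
    	double chance = 0.016285, p1_secs = 3030.00;	/* fval 2.0 row */
    	double prp_hours[] = { 40, 50, 100, 150 };
    
    	for (int i = 0; i < 4; i++)
    		printf("%.4f\t", p1_secs / 3600.0 + (1.0 - chance) * prp_hours[i]);
    	printf("\n");
    	return 0;
    }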
    If anyone could tell me how long a P4 2533 takes to do a test, it would make these results a bit more concrete.

    Also, these results show that in the long run, i.e. taking doublechecking into account, it makes sense to set the factor value to 2. But if prp testing threatens to overtake p-1 then lowering the factor value should not have too detrimental an effect on the overall throughput of the project.

  33. #153
    Originally posted by jjjjL
    It is using v23.4 objects but the main code in ecm.c isn't updated. I'll update it.
    The optimal bounds picker changed too.

    I remembered what the P4 optimal bounds bug was - the routine addr_offset in gwnum.c used uninitialized global variable FFTLEN rather than the local variable fftlen.

  34. #154
    I have switched to a 1.2 factor value for my current range as proth tests are catching up with the P-1 effort pretty fast.

  35. #155
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Got an Access Violation (0xc0000005) error message while trying to start the factorer on a P4 machine. The program crashes right after the CPU detection.

    I'm pretty sure the address won't be of much use, but I'll post it anyway: 41f7b4

  36. #156
    488399926097359 | 4847*2^4026567+1
    1863592358663911 | 10223*2^4029221+1
    51314675885537 | 19249*2^4026722+1


    Hi Louie,
    I forgot to log on when I submitted these three factors today. I resubmitted after logging on, but just wanted to let you know so the stats show up eventually.
    Thanks.

  37. #157
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    While we're on the subject of factor assignments:

    I forgot to log in prior to submitting today, too.
    Luckily, it's only one factor - something around 27.837T

    Yes, < 28 points, but I need every one I can get to eventually beat priwo...

  38. #158
    Originally posted by Mystwalker
    Got an Access Violation (0xc0000005) error message while trying to start the factorer on a P4 machine. The program crashes right after the CPU detection.

    I'm pretty sure the address won't be of much use, but I'll post it anyway: 41f7b4
    Linux I assume?

  39. #159
    Sieve it, baby!
    Join Date
    Nov 2002
    Location
    Potsdam, Germany
    Posts
    959
    Nope, it's the windows version.

    Is there a Linux version of the factorer available after all?

  40. #160
    Originally posted by Mystwalker
    Nope, it's the windows version.

    Is there a Linux version of the factorer available after all?
    Must be poorly aligned code. That's no good. The linux code had to be realigned to make it not crash on P4s. I can't say I tested it on windows P4 systems, but then again, I've heard others say that it worked, so I thought it was fine.

    The linux version of SBFactor is in the same zip file as the windows version. The "sbfactor" and "run.sh" files should do the trick. I'm running that version on a few P4s right now.

    -Louie
