I know it has the lowest probability
But my guess is 67607 by 7250000
What are/were the original projections for finding primes
x primes by n=y
x+1 primes by n=z
etc
It's been 7 months since we found a prime. I think that we are due to find one soon. Which k do you think it will be?
Code:
K      Probability
4847   0.98860
5359   0.99469  Prime Found
10223  0.99539
19249  0.85992
21181  0.98736
22699  0.84058
24737  0.98830
27653  0.94479
28433  0.91430
33661  0.98703
44131  0.99661  Prime Found
46157  0.87826  Prime Found
54767  0.99464  Prime Found
55459  0.99832
65567  0.83838  Prime Found
67607  0.79880
69109  0.97235  Prime Found
Joe O
I know it has the lowest probability
But my guess is 67607 by 7250000
What are/were the original projections for finding primes
x primes by n=y
x+1 primes by n=z
etc
In this thread last November I made the following model for the project evolution. At that time I argued that the first five had come "early" but that number six was not yet "late." By this model, Number Six came as expected and Number 7 is expected with n somewhere from 6.5 to 10.1 million.
Originally posted by vjs
What are/were the original projections for finding primes
x primes by n=y
x+1 primes by n=z
etc
William
Code:
1st    800K - 1.1M
2nd    1.1M - 1.6M
3rd    1.6M - 2.2M
4th    2.2M - 3.1M
5th    3.1M - 4.4M
6th    4.4M - 6.5M
7th    6.5M - 10.1M
8th    10.1M - 16.8M
9th    16.8M - 29.5M
10th   29.5M - 57M
11th   57M - 127M
12th   127M - 344M
13th   344M - 1.2G
14th   1.2G - 7.2G
15th   7.2G - 108G
16th   108G - 18T
17th   above 18T
Poohbah of the search for Odd Perfect Numbers
http://OddPerfect.org
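For anyone who wants to sanity-check ranges like these: assuming the same density model that produces the probability table later in this thread (the per-k densities Joe O posts below, and the factor of 2 from his formula), the expected number of new primes with exponents between n1 and n2 is roughly 2 * (sum of remaining densities) * log2(n2/n1), and the ranges above look like they were chosen so this comes out near 1. A small Python sketch; the density list is copied from the later table and is only an assumption about what fed the model:
Code:
import math

# Densities of the k's with no prime yet, copied from the table Joe O
# posts further down this thread (an assumption about the model's inputs).
remaining_densities = [0.09877, 0.11875, 0.04339, 0.09649, 0.04054, 0.09820,
                       0.06394, 0.05424, 0.09592, 0.14102, 0.03540]

def expected_primes(n1, n2, densities):
    # Poisson-style expected count of new primes with exponent in (n1, n2]
    return 2 * sum(densities) * math.log2(n2 / n1)

# The "7th prime" range from the table above, 6.5M to 10.1M:
print(expected_primes(6.5e6, 10.1e6, remaining_densities))  # roughly 1.1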
I'll be pulling for the underdog in this matchup: 67607
Me too, just because I remember how inaccurate and fragile those weights are ;-)
21181 - the one I am testing right now, of course. Pessimism or thinking about crunching 18T won't get you too far at all.
How did you come up with this? The most probable still has no prime, while the least probable and the 4th least probable have primes. I guess getting more accurate probabilities will be the next step in defining Sierpinski numbers.
Originally posted by Joe O
It's been 7 months since we found a prime. I think that we are due to find one soon. Which k do you think it will be?
Code:
K      Probability
4847   0.98860
5359   0.99469  Prime Found
10223  0.99539
19249  0.85992
21181  0.98736
22699  0.84058
24737  0.98830
27653  0.94479
28433  0.91430
33661  0.98703
44131  0.99661  Prime Found
46157  0.87826  Prime Found
54767  0.99464  Prime Found
55459  0.99832
65567  0.83838  Prime Found
67607  0.79880
69109  0.97235  Prime Found
How many Sierpinski numbers have been proved?
I'll bite and take the one with the biggest probability: 55459 (just because I like bigger numbers)
I was only nine days off from the 2M digit test date (woo, 3rd place!) but I'm thinking we'll hit a prime within the next 6-8 months.
There's a Java applet online that will compute the "probabilities." (They're actually densities, not probabilities, where a density of 1.000 is equal to the density of primes in a random sequence of integers... see Mathworld's page on the prime counting function for more on that.) Without getting into heavy math, the density is estimated basically by picking a k and then counting the number of n candidates remaining after a sieve to a certain threshold.
How did you come up with this? The most probable still has no prime, while the least probable and the 4th least probable have primes. I guess getting more accurate probabilities will be the next step in defining Sierpinski numbers.
Infinitely many, if you ignore the requirement that k be odd. If k is a Sierpinski number then 2k is also a Sierpinski number, since (2k) * 2^n + 1 = k * 2^(n + 1) + 1.
How many Sierpinski numbers have been proved?
Even when considering only odd k there are still infinitely many Sierpinski numbers (Sierpinski proved this in 1960, although the first explicit Sierpinski number wasn't identified until 1962). Of course only a finite number are and will ever be known.
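For the curious: every k proved to be a Sierpinski number so far was proved by exhibiting a finite "covering set" of primes such that every k*2^n + 1 is divisible by at least one of them. A quick Python check for 78557 (the smallest proven Sierpinski number, with covering set {3, 5, 7, 13, 19, 37, 73}), plus the 2k observation above:
Code:
k = 78557
covering_set = [3, 5, 7, 13, 19, 37, 73]

# Every term 78557 * 2^n + 1 is divisible by some member of the covering set,
# so none of them can ever be prime.
for n in range(1, 5000):
    assert any((k * 2**n + 1) % p == 0 for p in covering_set)

# If k is Sierpinski, so is 2k, since (2k)*2^n + 1 = k*2^(n+1) + 1:
n = 123
assert (2 * k) * 2**n + 1 == k * 2**(n + 1) + 1
print("all checks passed")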
These are the densities I used. They should correspond to those obtained from the Java applet.
Code:
K      Density  N        Probability
4847   0.09877  6578031  0.98860
5359   0.11761  5054502  0.99469  Prime Found
10223  0.11875  6578105  0.99539
19249  0.04339  6578078  0.85992
21181  0.09649  6578060  0.98736
22699  0.04054  6577318  0.84058
24737  0.09820  6578191  0.98830
27653  0.06394  6578097  0.94479
28433  0.05424  6578017  0.91430
33661  0.09592  6578208  0.98703
44131  0.14273  995972   0.99661  Prime Found
46157  0.05424  698207   0.87826  Prime Found
54767  0.12846  1337287  0.99464  Prime Found
55459  0.14102  6578218  0.99832
65567  0.04567  1013803  0.83838  Prime Found
67607  0.03540  6577227  0.79880
69109  0.08906  1157446  0.97235  Prime Found
1 - EXP(-2*Density*LOG(N)/LOG(2)) is the formula I used.
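In other words, P(at least one prime by exponent N) = 1 - exp(-2 * density * log2(N)). A minimal Python sketch that reproduces a few rows of the table above from the density and N columns:
Code:
import math

# (k, density, N) rows copied from the table above
rows = [
    (4847,  0.09877, 6578031),   # table says 0.98860
    (67607, 0.03540, 6577227),   # table says 0.79880
    (55459, 0.14102, 6578218),   # table says 0.99832
]

for k, density, n in rows:
    p = 1 - math.exp(-2 * density * math.log(n) / math.log(2))
    print(f"k={k}: {p:.5f}")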
The most probable still has no prime, while the least probable and the 4th least probable have primes.
In theory, there is no difference between theory and practice. In practice, there is.
Joe O
When we reach a point where tests become too large to handle on a single computer, are there other types of prime testing that can easily be distributed onto separate machines and still finish in somewhat reasonable amounts of time?

I'm assuming that once we have found the 7th, 8th, 9th, and 10th prime we should have reduced the number of candidates down to a specific few, plus at that point we should have the sieve up to several hundred thousand T (for the values over 20,000,000, which will be sieved much deeper, I assume, since the amount of work saved for each test is vastly greater). However, these few tests will take even much faster computers than we have now years to complete. If we could distribute the work over different machines then we could work on, say, 10 tests at a time as a whole and avoid risking losing large amounts of work to computers that only work for the project for a few months and then get shifted elsewhere.

I guess this is not really a very important question since it'll be several years before we have the problem, but I'd like to think we could hit a couple of primes quickly and get finished with the first 20,000,000 in much less time. After all, if we were to find only 4 primes then we could eliminate almost half of the total work for the project. Plus membership has been growing, and a new and improved client would create tremendous PR for the project.
As for splitting up tests, it is my understanding that all previous data blocks are required to continue work on the next block, leading to a prime or not-prime result. What we really want are hardware and algorithm breakthroughs. If we could drop the growth order a bit and let the natural growth of computers do the rest... happy times. Here's to hoping we find a prime soon. We are in the right range, right? ... Yep, 6.5-10.1M.
I was actually asking if there were any other prime-verifying algorithms that would be possible to break into pieces. Not that it is time to abandon PRP tests now, but perhaps there will be a better way once we get further down the line.
I think an algorithmic breakthrough, or at least a complete redesign, will be required once you pass a certain limit - specifically when the size of the number 2^n and the intermediate calculations exceed available memory, say 1,000 GB (= RAM plus swap).
When will we reach it? Will we factor, sieve and PRP faster than Moore's law? If not, we will hit a limit.
2^18T is a truly large number - the exponent is something like 3,000,000 times bigger than where we are now. A new algorithm will be required unless we are here for a long time.
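Just to put rough numbers on that (back-of-envelope arithmetic only): a candidate k*2^n + 1 with n near 18T has about 18 trillion bits, so even a single copy of the working residue is a couple of terabytes - well past the 1,000 GB mentioned above.
Code:
n = 18e12                       # exponent in the neighborhood of 18T
residue_bytes = n / 8           # k*2^n + 1 is roughly n bits long
print(f"one residue: about {residue_bytes / 1e12:.1f} TB")   # ~2.2 TB

# and compared to where first-pass tests are now (n around 6.5M):
print(f"exponent is about {n / 6.5e6:,.0f} times larger")    # ~2,800,000x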
Dave/louie/mathguy/someone else - what n will the new client be able to calculate up to?
I think one of our largest speed increases in terms of n could come from incorporating P-1 testing into the main PRP client.
I don't see why this isn't released already... hopefully it will be included in V3.
As royanee said, what the program basically does is iterations - over and over again. And each new one depends on the previous one. Thus, it is not possible to split these tests over several computers. One could transfer the cache files (which are essentially the result of an iteration), but it is not possible to do joint computations on one k/n pair in parallel.
are there other types of prime testing that can easily be distributed onto separate machines and still finish in somewhat reasonable amounts of time?
There might be a solution for this, but it hasn't been found so far...
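To make the data dependency concrete, here's a toy Python PRP test for N = k*2^n + 1 (real clients use FFT-based multiplication, but the chain of squarings has the same shape). Each squaring needs the previous residue, which is why the only thing you can hand to another machine is the intermediate residue itself - i.e. what the cache/z-files contain:
Code:
def fermat_prp(k, n, base=3):
    """Toy base-3 Fermat PRP test for N = k*2^n + 1: checks base^(N-1) == 1 (mod N)."""
    N = k * 2**n + 1
    residue = pow(base, k, N)      # base^k mod N
    for _ in range(n):             # then n sequential squarings -> base^(k*2^n) mod N
        residue = residue * residue % N
    return residue == 1

print(fermat_prp(3, 2))        # 3*2^2 + 1 = 13 is prime -> True
print(fermat_prp(78557, 20))   # a Sierpinski k, always composite -> False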
On the other hand, GIMPS tests N's of > 20M (some even > 30M) right now. As they have no k, their computations are a lot faster (e.g. lower FFT sizes can be used), but even the fastest current (mainstream) computers need at least 2 weeks for a test; the average should be well above one month, maybe even 2. But it works for them, nevertheless. I think that SoB will reach this time scale at maybe N > 15M - when one considers current PCs. Taking Moore's Law into account, I think it will be a lot more than N > 20M.
Until then, a lot of years will have passed, and I guess no one can plan the development until then. Maybe someone comes up with a great idea, maybe not. Look at MD5CRK, which tried to crack MD5 for several months. In the end, some Chinese researchers were able to crack it in 1 hour!
Memory consumption won't be a problem for a long time, as it doesn't take a lot of it to perform the test. AFAIK, it basically depends on the FFT size, plus some additional demands to a certain degree.
specifically when the size of the number 2^n and the intermediate calculations exceed available memory, say 1,000 GB (= RAM plus swap)
By the time we get into N ranges that high, standard installed memory sizes will be very large.
I guess V3 is the reason it isn't included so far. The benefit of integrating it into the current client seems quite low compared to the effort. It is more reasonable to put this effort into the development of the new client.
I think one of our largest speed increases in terms of n could come from incorporating P-1 testing into the main PRP client.
I don't see why this isn't released already... hopefully it will be included in V3.
Apart from that, PRP3 seems to provide a nice speed increase. I've only used it for PSP so far (bigger k's, N ~ 1.2M), but it gives a nice speedup. I'm not sure if this holds true for all PCs or only SSE2-enabled ones, though...
Last edited by Mystwalker; 09-08-2004 at 11:43 AM.
I guess V3 is the reason it isn't included so far. The benefit of integrating it into the current client seems quite low compared to the effort.
Not sure if I can agree with this statement.
I thought there was an advantage to P-1 right now, and there always will be. Currently we are P-1'ing ranges of n values; however, it's not like the sieve - P-1 does 1 k/n pair at a time. Wouldn't it make sense to have each user first complete a P-1 test on the particular one he is about to test before PRP'ing the pair....
Or is your comment related to the effort involved in developing the client? If so, I totally agree.
Yes, but it isn't coded (yet?).
Originally posted by vjs
Wouldn't it make sense to have each user first complete a P-1 test on the particular one he is about to test before PRP'ing the pair....
Sorry if I'm mistaken, but isn't the point of the 1-2 value on the P-1 test to say how much time to spend trying to find a single factor, relative to the size of the tests it's run on? Meaning it doesn't actually save any time, because it only sets itself to find one factor in the amount of time it would have taken to complete one PRP test. However, if the resources were instead used to speed up the PRP process, then finding a single prime would eliminate thousands of tests. After all, finding factors doesn't eliminate k values.
Keroberts1
There was a discussion on this in the P-1 forum; what I gathered was the following.
The 1-2 value is the number of tests which would be eliminated by finding a factor for that particular k/n pair (the consensus is to use 1.5).
The 48/49/50 value is the sieve depth, or how far we have sieved (using 49 is probably best).
What these values do is simply choose the correct B1 and B2 bounds for the test.
The program then considers whether it's worth doing a P-1; if it's not worth it, it won't run.
Try it: if you use 1.2 49 for any n < 7M the program won't run; you have to use 1.2 48 or 1.5 49.
So the P-1 program does take into account the fact that it must eliminate k/n pairs faster than PRP'ing each of them.
I'm pretty sure that at any point, if your computer P-1's for the same time it would take to do one PRP test, you will eliminate more than 1 k/n pair.
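A rough sketch of the trade-off being described (illustrative only; the function and numbers below are assumptions, not the actual client code). The idea is simply that a P-1 run is worthwhile when the chance of finding a factor, times the PRP tests that factor would eliminate, outweighs the time the P-1 run itself takes:
Code:
def p1_worth_running(prob_factor, tests_saved_per_factor,
                     p1_hours, prp_hours):
    """Rough benefit/cost check for P-1 on one k/n pair.

    prob_factor            -- chance the chosen B1/B2 bounds find a factor,
                              given how deep the pair is already sieved
                              (in the real client the bounds come from the
                              1.2/1.5/2.0 and 48/49/50 inputs described above)
    tests_saved_per_factor -- the "1.2 / 1.5 / 2.0" value
    """
    expected_hours_saved = prob_factor * tests_saved_per_factor * prp_hours
    return expected_hours_saved > p1_hours

# Illustrative numbers only (assumed, not measured):
print(p1_worth_running(prob_factor=0.04, tests_saved_per_factor=1.5,
                       p1_hours=2.0, prp_hours=40.0))   # True -> worth running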
Depending on your settings, sure, but it can be set to find a factor in less time than one PRP test. I think most (if not all) tests done in the P-1 effort already save time compared to PRP.
If we had sieved up to a higher level, on the other hand, I think even more time could be saved (I don't have any real data on that, but it *feels* so). P-1 speeds things up now, while sieving speeds things up permanently (well, up to 20M at least).
Re-read the resource allocation model thread for more info.
P-1 and sieving both speed things up, and are valuable. It would be a good idea to do as GIMPS does, with the P-1 preceding the PRP. That way the P-1 work could be handed out automatically to the users. This would end the organized P-1 effort, but the same process would be done by the people doing the PRP tests. Maybe sieving could also be added to the client. Of course, it would probably be best to leave that to something like the "advanced" tab in Prime95, which needs a password (found in the documentation) to unlock. I'm hungry... I'm craving a prime.
The GIMPS client doesn't do P-1 before the Lucas-Lehmer test, it's plain trial factoring. That's possible because they know their numbers only have factors of a specific form.
Originally posted by royanee
P-1 and sieving both speed things up, and are valuable. It would be a good idea to do as GIMPS does with the P-1 preceding the PRP.
Originally posted by hc_grove
The GIMPS client doesn't do P-1 before the Lucas-Lehmer test, it's plain trial factoring. That's possible because they know their numbers only have factors of a specific form.
From the readme file, line 236, in the prime95 v23.8 zipfile:
The P-1 factoring step prior to running a Lucas-Lehmer test is more effective if it is given more memory to work with.
You bet it does! I've used Prime95 for 2.5 years and it does perform a P-1 if it has not been performed before.
Originally posted by hc_grove
The GIMPS client doesn't do P-1 before the Lucas-Lehmer test, it's plain trial factoring. That's possible because they know their numbers only have factors of a specific form.
Just wanted to bring this post back up to the top:
I have a feeling we are going to get a prime soon...
From an old post,
my guess was 67607 by 7250000
Not looking that promising, since we are handing out tests around 7.26M currently.
Well, the client could just upload the z-file to the server, and the server could send it to another client along with the k/n pair. This would increase traffic slightly, but it would allow the project not to recalculate abandoned tests from the beginning...
Originally posted by Mystwalker
As royanee said, what the program basically does is iterations - over and over again. And each new one depends on the previous one. Thus, it is not possible to split these tests over several computers. One could transfer the cache files (which are essentially the result of an iteration), but it is not possible to do joint computations on one k/n pair in parallel.
There might be a solution for this, but it hasn't been found so far...
z0600000 takes 87k
z2500000 takes 300k
z6000000 takes 750k
so 100 intermediate blocks require 75 Mb to upload. hmmmmm.... dunno.
Perhaps it could simply be added as a button that says "abandon test": click an icon and it will send in the results and release the test back into the queue immediately. Should be kept in mind for V3.
It should be placed into a special queue (or a special attribute should be tagged to the test) so that only people willing to download the cache files really get them...
Meanwhile, save files already are bigger:
Originally posted by Death
z0600000 takes 87k
z2500000 takes 300k
z6000000 takes 750k
600K --> 1.2M
2.5M --> 4.1M
6M --> 7.6M
At least file size is only linearly dependent on n, AFAIK.
So available bandwidth should increase faster than bandwidth need.
Last edited by Mystwalker; 12-02-2004 at 01:33 PM.
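Treating the file size as roughly proportional to n (a simplifying assumption based on the statement above), a quick extrapolation of the upload cost per hand-off:
Code:
# newest figure quoted above: n = 6M -> about 7.6 MB per save file
bytes_per_n = 7.6e6 / 6e6        # ~1.3 bytes per unit of exponent

for n in (10e6, 15e6, 20e6):
    print(f"n = {n/1e6:.0f}M -> roughly {bytes_per_n * n / 1e6:.0f} MB per save file")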
Well no one picked 28433, but we are all winners anyway!
Joe O
O.K. I'll guess the next one...
55459 or 10223
by n=3.5M
VJS, you just can't help promoting double check and sieving nowhere, can you
I'll take my old favorite k=4847, and just to make sure I don't hit another lucky punch, let's say N ~ 9.5M.
Well, good for him... we need more machines on double checks. We're way, way behind there, and falling further behind each day...
VJS you just can't help promoting double check and sieving nowhere can you
@Kugano
I have given all my old machines (five P3 400 MHz) the QQQsecondpass username (moved two to sieving), so I'm listening to vjs. It just takes a while before the QQQsecondpass tag kicks in, because it takes quite a while for them to finish their current tests... so far only 1/5 have started double checks.
With your post giving me the little extra kick, I'll also move the little CPU time this P4-3GHz gives to PRP over to QQQsecondpass.
Frodo, that's why I was hiding behind the couch when I said it
I personally think you're doing a great job in factoring and would hate to lose you there. I know I don't support factoring enough, but IMHO that's where all my P4's would be... unfortunately I don't have any
Also, explaining how the sieve works and how to set it up is hard enough - let alone factoring!!!
I generally do try to push a lot of committed people like yourself into the subprojects, basically because if anyone is reading this board they are already a step ahead in understanding the project. Also, I'm sure we have a lot of set-and-forget people running main PRP, so I don't think that effort will ever lag because of my persuasion.
As for running those P3's, second pass is probably the best place for them if you don't want to fiddle with sieve ranges.
We will also get a better feeling for where second-pass PRP should be after kugano works his magic on the error stats.
I may be totally wrong, but my guess is second pass should be in the 2.5 +/- 0.2M range currently.
If you are right, would that also mean that there is a bigger chance of finding a prime with QQQsecondpass until we reach n=2.5M?
Originally posted by vjs
I may be totally wrong, but my guess is second pass should be in the 2.5 +/- 0.2M range currently.
You're right, second-pass almost certainly has a better chance of finding a prime than first-pass does right now. And really I think the second pass should be higher than 2.5M... I haven't done the numbers yet, but I'm guessing it's more like 4M, +/- 1M...
if you are right, would that also mean that there is a bigger chance of finding a prime with QQQsecondpass until we reach n=2.5M?
Too bad that while each block gets a score of approx. 0.00024995 TcEM, one block of a second-pass test at n=1299511 (9 blocks total) takes much more time to finish (0:09:38 per block on my machine right now) than one block of a first-pass test at n=8039645 (2080 blocks total), which takes 0:01:38 per block on my machine right now.
Originally posted by kugano
You're right, second-pass almost certainly has a better chance of finding a prime than first-pass does right now. And really I think the second pass should be higher than 2.5M... I haven't done the numbers yet, but I'm guessing it's more like 4M, +/- 1M...
[edit] I tried to find calculations for finishing one block of a test at this n, but only had results for 7M+ tests. [/edit]
Last edited by Joh14vers6; 01-20-2005 at 12:15 PM.
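For what it's worth, the per-hour gap implied by those block times works out to roughly a factor of six (same score per block, very different time per block):
Code:
points_per_block = 0.00024995    # TcEM per block, as quoted above

tests = {
    "second pass, n=1299511": 9 + 38/60,   # minutes per block on that machine
    "first pass,  n=8039645": 1 + 38/60,
}

for name, minutes_per_block in tests.items():
    per_hour = points_per_block * 60 / minutes_per_block
    print(f"{name}: {per_hour:.5f} TcEM per hour")
# roughly 0.00156 vs 0.00918 TcEM/hour -- about 6x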