I agree. We've done enough of checking the holes back there, and we don't need to do that for the time being.
Moment of sieving glory! Two factors have popped out in the last 15 minutes on the two machines I have present at my home. Both have been in the active range and have N's within 1,000 of each other. What are the odds of that? I'm not sure, because I already submitted the factors and don't want to dig for the file again, but I think they were both the same K too.
I think we still need the sieving speed of both computers to calculate the odds.
456.430T 24737 7031623
454.876T 27653 7030689
Very cool and about 130K points as well
:drums: sieve main effort +400T :|party|:
:elephant: prp almost at 700m :bouncy:
:thumbs: Double check supersecret almost at 1m :cheers:
And P-1 has a little padding at 700m :smoking:
I think this is a milestone; hopefully we will have a prime shortly.
My bets are still on for 67607 by 7.25m.
about 450 K and 550 K, and they just happen to be my only machines sieving at the moment
I hope there are no theoretical errors in my calculations...Quote:
Originally posted by Keroberts1
Moment of sieving glory! Two factors have popped out in the last 15 minutes on the two machines I have present at my home. Both have been in the active range and have N's within 1,000 of each other. What are the odds of that? I'm not sure, because I already submitted the factors and don't want to dig for the file again, but I think they were both the same K too.
about 450 K and 550 K, and they just happen to be my only machines sieving at the moment
Assuming a factor density of 1 per 30G, I came to the conclusion that the chance of finding a factor within the active range within 15 minutes on the 450K machine is roughly 1:7,000.
The chance that the other machine finds a factor within an "N distance" of 1,000 within 15 minutes seems to be ~1:570,000.
So, when you pick some specific 15 minutes, the chances are 1:3,990,000,000. If you are not picky about the time these two events have to take place (although they have to be within 15 minutes of each other), I guess the probability should be 1:285,000 - the likelihood that the faster machine finds a suitable factor within a 30-minute time frame (15 min before, 15 min after) of the other factor find.
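This kind of estimate can be sketched as a Poisson process. All the parameters below (the sieve speeds in kp/s, the 1-per-30G factor density, and the fraction of factors landing in the active range) are illustrative assumptions for the sketch, not figures confirmed in the thread:

```python
import math

def p_factor(rate_kps, minutes, density_per_g=1/30, active_fraction=0.05):
    """Chance of finding >= 1 active-range factor within `minutes`,
    modelling factor finds as a Poisson process.

    rate_kps        -- sieve speed in kp/s (thousands of p tested per second)
    density_per_g   -- overall factors per G of p-range (1 per 30G assumed)
    active_fraction -- assumed fraction of factors in the active n-range
    """
    g_per_sec = rate_kps * 1e3 / 1e9          # G of p-range covered per second
    expected = g_per_sec * minutes * 60 * density_per_g * active_fraction
    return 1 - math.exp(-expected)            # P(at least one find)

p_slow = p_factor(450, 15)   # slower machine, a given 15-minute window
p_fast = p_factor(550, 30)   # faster machine, 30-min window around the first find
print(p_slow, p_fast, p_slow * p_fast)
```

With these toy numbers the individual chances come out on the order of 10^-3 to 10^-4, the same ballpark as the 1:7,000 figure; the joint probability for one specific window is their product, which is why the "not picky about the time" version is so much larger.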
I guess you mean 7m, don't you? Otherwise, I seem to have missed something... :DQuote:
Originally posted by vjs
:elephant: prp almost at 700m :bouncy:
And P-1 has a little padding at 700m :smoking:
My bets are still on for 67607 by 7.25m.
Concerning the next prime, I think it will be 7.4M < N < 7.5M, as this is the range of numbers that will likely be (completely) tested at the end of November / early December, SB's prime-finding season. :p
I plan to resieve this range of mine:
186.11G really seems a little too much. :( Are there any reasons against this?
Code:
RATIO: 338245.56G - 338431.67G, size: 186.11G, est fact: 7 (338245560022561-338431671692333) R: 1.042, S: 0.323
#*# ( 186.11G) : 337500-340000 Mystwalker [complete]
I won't start the sieve within a day...
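As a side note, the "est fact: 7" figure in the RATIO line is roughly what the 1-factor-per-30G density quoted earlier in the thread would predict for a gap this size. A minimal check (the 30G density is the only assumption):

```python
# Sanity check of the "est fact" figure in the RATIO line above,
# assuming the ~1 factor per 30G density mentioned earlier in the thread.
gap_start = 338245560022561   # gap endpoints (p), from the RATIO line
gap_end   = 338431671692333
gap_g = (gap_end - gap_start) / 1e9   # gap size in G
expected_factors = gap_g / 30         # ~1 factor per 30G
print(f"gap: {gap_g:.2f}G, expected factors: {expected_factors:.1f}")
```

That gives ~6.2 expected factors over the 186.11G gap, consistent with the tool's estimate of 7.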
Not as a reason against resieve, but there's an excluded factor in between ( 338315236675243 55459 115090 519 9 e), which implies that you sieved the range.
I guess it was bad luck on your side to hit a hole that large.
186.11G really seems a little too much. I generally keep all of my factXXX.txt files until after Mike has cleared the range or I have no gaps, etc.
You might want to look for these files and submit them as well.
This is part of the reason why I also submit factrange and factexcl. I'm not sure how these files work from the server's standpoint, but I would think it would help prove that the range was done correctly and reduce the gaps, etc.
I'm sure there are cases where large gaps occur by chance, hence the words "acceptable gaps". Some gaps will be larger than "acceptable" even if the range was done correctly.
I also noticed
http://www.aooq73.dsl.pipex.com/gaps...n3_20_p01u.htm
You have some smaller "acceptable gaps". If you're splitting ranges, you may wish to check whether all of these large-gap ranges were done with one particular machine, retest on another, etc.
There was an issue in the past where someone had an o/c'ed or overheating machine that was missing factors, so could this be your case?
Let us know what happens. At this point I wouldn't redo the range unless you can actually attribute all large gaps to one particular machine, or can somehow document the fact it wasn't done.
If you do decide to redo, how much of the range do you redo?
The gap may be 338245.56G - 338431.67G, but maybe you missed 338250-338500 entirely and someone else submitted a factor at 338431.67G for example.
Since the lowest reported gap for that region is 106G, your gap could actually be up to ~290G.
In any respect keep up the good work!!!
Looks like Nuri beat me to the punch on your question.
May I sieve it? Or rather... now that I'm more than 1/3 of the way through sieving it... is it okay if I finish it, and somehow attribute the results to you? (If I am not logged in, you get the results, don't you?)Quote:
Originally posted by Mystwalker
I plan to resieve this range of mine:
186.11G really seems a little too much. :( Are there any reasons against this?
Code:
RATIO: 338245.56G - 338431.67G, size: 186.11G, est fact: 7 (338245560022561-338431671692333) R: 1.042, S: 0.323
#*# ( 186.11G) : 337500-340000 Mystwalker [complete]
I won't start the sieve within a day...
No problem, just continue. If you find a factor, you can claim the score for yourself if somehow possible. I think that's just fair..Quote:
Originally posted by royanee
May I sieve it? Or rather... now that I'm more than 1/3 of the way through sieving it... is it okay if I finish it, and somehow attribute the results to you? (If I am not logged in, you get the results, don't you?)
Did you already find something?
I'm quite sure I did that range on the cluster, which means the range was split into 10G parts and distributed over ~30 PCs...Quote:
Originally posted by vjs
if you're splitting ranges, you may wish to check whether all of these large-gap ranges were done with one particular machine, retest on another, etc.
There might have been a problem when collecting the results, though. I'm unaware of such a glitch (I only once lost some results when deleting them before submitting :bang: , but I immediately noticed it), but I won't bet my life on it. ;)
What program(s) are you using? Which version[s]? The reason that I ask is that some versions of SoBsieve had problems. Look at my last post in this thread.Quote:
Originally posted by Mystwalker
I'm quite sure I did that range on the cluster, which means the range was split into 10G parts and distributed over ~30 PCs...
Maybe 1 factor (could be excluded since .dat creation) and maybe 1 out of range n = ~115kQuote:
Originally posted by Mystwalker
No problem, just continue. If you find a factor, you can claim the score for yourself if somehow possible. I think that's just fair..
Did you already find something?
All use proth_sieve 0.42 Linux version.Quote:
Originally posted by Joe O
What program(s) are you using? Which version[s]?
I don't understand...Quote:
Originally posted by Moo_the_cow
Why not?
Keroberts and I are trying to finish everything between 450-500 before the main effort catches up... Yes, it's a silly goal, it doesn't matter that it's all slow machines sieving, etc., but hey, it's fun; that's why we're here.
450002-460000 keroberts1
460000-470000 VJS [Complete] 337 factors
470000-470005 KenG6 (ETA: mid Jul) (pm sent Aug 30th)
470005-480000 VJS (ETA: Late Sept)
480000-481000 Moo_the_cow
481000-490000 VJS
498000-504000 Complete
I know you're big in factoring and contribute, etc., so I was wondering if you're still working on the range, just have it reserved, etc.
According to
http://www.aooq73.dsl.pipex.com/ui/2537.htm
and
http://www.aooq73.dsl.pipex.com/gaps...n3_20_p04u.htm
You haven't submitted anything, so I was wondering.
I asked Mystwalker about his 550000-550500 range; he said he's using it as a backup, etc., which is great.
I actually have one machine working on
560000-562000, but it won't finish for months, and chances are I'll never be able to recover those results even if it does finish its range, so it's not posted/reserved, nor should it be.
(Snipped /ceselb)
So which is it: "why not?", "yes, I'm working on it", "no, it's mine", "yes, you can have it", etc.?
Thanks for your time.
(quote removed /ceselb)
I meant that I didn't see why you thought that I wouldn't finish the range. I haven't submitted any results, because I usually submit all my results at the same time when the entire range is done. Unlike some people, I don't submit factors at the end of every day.
I don't mind if you want some of my range. If that's the case, take 480500-481000, and I'll just sieve 480000-480500.
I usually do at the end of the sieve, unless I am checking the fact.txt file and see one with n near 6m or 7m, etc., so that I can save someone a Proth test. :) But that's pretty rare for me.
Thanks Moo,
I didn't mean to insinuate that you were not going to finish (although I guess I did); I should have asked about your progress instead. I'm generally more like royanee: I check the fact file and submit when I have factors near the double-check bounds or within 200K of prp. Might just save a factoring effort as well.
However, doing >300G per day combined :D with all my machines generally means I have at least one factor a week within the bounds. Looking at sobistrator right now, I have a factor around 7.08, ~7.2, and :trash: ~6.65. But nothing urgent, so I'll probably submit on Monday.
Thanks for everyone's efforts: Moo, ceselb, Mike.
P.S. There is a very off chance I might get a true quad 2.8 HT Xeon 4mb for a week or two; might be my first decent factoring machine I could play with.
if you are looking at sobistrator and see new factors -
WHY THE H8L you don't press submit button??????? :crazy:
why???? why???????? :cry:
Not all of my machines are networked... so dividing and submitting a range once or twice a week ensures I don't miss any portions of a range, etc.
Don't worry, I make sure those factors close to prp, doublecheck, or garbage get submitted :thumbs: and simply use sobistrator to monitor the activity ("new factors"); not all of my computers are on a network.
Been quite a bit of activity on secret, supersecret and garbage lately; is that you, Death???
Perhaps we should move all of this to the discussion.
nope. that's somebody else... i'm currently stuck with boinc lhc@home. they only have 2000 users to test the client and i'm on board.
maybe l8r...
and you say that sobistrator is on a machine w/o network, am I right? because otherwise I just can't understand how you can look at sobistrator and not press the submit button? =)) or just turn on the auto-submit feature..
and don't worry about the discussion. I bet ceselb will kill our posts in a couple of days... =))
I don't know if this question was also directed towards me, but the answer is that I'm too lazy to even check fact.txt for any new factors.Quote:
Originally posted by Death
if you are looking at sobistrator and see new factors -
WHY THE H8L you don't press submit button??????? :crazy:
why???? why???????? :cry:
Actually, I don't think I have enough computing power to finish the 480000-481000 range before the main sieve effort catches up, so you were probably right.Quote:
I didn't mean to insinuate that you were not going to finish etc
Good luck in finishing your 450-500T sieve effort :)
Found one:Quote:
Originally posted by royanee
Maybe 1 factor (could be excluded since .dat creation) and maybe 1 out of range n = ~115k
338.322T 24737 15174463
As you said, probably a small chunk was lost for some strange reason. But a missed factor isn't as bad as a missed prime. :)
Curious about how the increase in the prp/P-1 client speed affects sieve effectiveness.
It was stated somewhere before that, due to factor density for a range, sieving speed, prp time per n, possible primes, double checks, duplicate factors, etc., it was still efficient to sieve to some very high T, much greater than the present T.
Just wondering how a much faster P-1/prp client changes the sieve.
Quite a bit is my guess. :D
I don't know really, but maybe nuri can do some calculations. :notworthy
I'm also throwing this out there for consideration, after the next prime is found and we create a new data file.
It might be beneficial to remove an additional k :eek: (i.e. the prime and another).
For example, the largest K ???67XXX?? takes something like 15% of the processing time of sieving but only contributes 5% of the eliminated factors (I know these numbers are off), but something like this might be of importance. Or breaking the sieve into two sections with two data files, dunno. Might be more effort than it's worth, or counterproductive.
I don't think that the efficiency is affected much at all, because the new client is only good for SSE2 machines, which shouldn't have been sieving anyway. These machines would have been best suited for P-1 and now for prp. The prp boundary moving faster only means more of our factors will enter the active range quicker. The only thing that affects sieve efficiency that I see as relevant here is the prp depth. As the prp depth gets farther along, the number of factors that will have already been passed by the front end of the first checking phase will be greater, hence more wasted factors.
There is work being done on a non-SSE2 version too. That will have more effect.
True, but without big performance increases. It's just like a junction, directing SSE2-capable CPUs to the new code and all the others to the old code.Quote:
Originally posted by ceselb
There is work being done on a non-SSE2 version too. That will have more effect.
The enhancements are SSE2-specific and mainly the result of George generalizing GIMPS-specific code to general purpose code (SoB, Riesel, PSP, 321, ... benefit from it).
It has been said that there will be non-SSE2 versions released too.
death is always correctQuote:
Originally posted by engracio
SOLD!!!!! for 5 cents. Since Mike is not saying anything (thinking it does not matter to him either way; better safe than sorry), I will upload the other two files as I complete the range. The fact.txt is uploaded as soon as it is found. Thanks for the info, Death, guess you were correct. :) :) :)
e
:swear: :swear: :taz: :taz: :cage:
Arrrrgh, just submitted two factors around n = 7.05m and 7.08m; they both popped up last night. A day or two late, I guess.
Just to stir up a little excitement in the sieve and P-1 area, with challenges back and forth, here is what happened.
Way to go sievers.... :cheers:
dmbrubac,
Just letting you know I found this factor in your range:
481790636706229 | 24737*2^7263031+1
What a coincidence
Sieve 480500-490000 VJS
P-1 7262000 7280000 dmbrubac 0 [reserved]
7th dmbrubac ... 3326948.25
8th vjs ............ 3142839.40
I feel as though I'm going to steal 7th from you with this factor, worth about 80k.
I was looking at this range. I know it is way, way ahead of where we are, but the expected number of factors per 1000G is very high:
1763615.23G - 1812168.61G, size:48553.38G, est fact:11846
244.0 factors per 1000G
A lot higher than one of my previous ranges:
460000G - 480000G, size: 20000G, factors found 689
34.45 factors per 1000G
Two things:
First, the predicted factors per G must be incorrect.
Second, I tried sieving the range but the client wouldn't work. Is there a maximum G the client will accept???
Thanks,
VJS
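The suspicion that the predicted figure is wrong can be sanity-checked with a rough heuristic: a prime p divides a given candidate with probability on the order of 1/p, so the factors-per-G density should fall roughly in proportion to 1/p as p grows, not rise. A sketch using the numbers quoted above (the 1/p scaling and the use of range midpoints are my assumptions):

```python
# Extrapolate factor density from an already-sieved range, assuming the
# factors-per-G density falls roughly like 1/p.
known_p = 470e12        # midpoint of the completed 460000G-480000G range
known_density = 34.45   # factors per 1000G actually found there
far_p = 1788e12         # approx. midpoint of the 1763615G-1812169G range
predicted = known_density * known_p / far_p
print(f"extrapolated: ~{predicted:.0f} factors per 1000G at ~1788T")
```

That predicts on the order of 9 factors per 1000G, nowhere near the quoted 244 per 1000G, which supports the "predicted factors must be incorrect" point.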
The program can only sieve up to 2^50, which is just a little bit smaller than 1125900 G
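For reference, the 2^50 limit can be expressed in G (taking 1G = 10^9):

```python
# The sieve client's p limit of 2^50, converted to G (1G = 10^9).
limit_g = 2**50 / 1e9
print(f"2^50 = {limit_g:.1f}G")   # just under 1125900G
```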
Just wanted to let everyone know I'm getting a slight slowdown on my ~550T ranges. I remember my completed range at 2^49 (560T) also being a little slower.
I was wondering if anyone else is seeing the same effect...
I guess this is directed mostly toward Nuri, Keroberts, and mklasson.
Yes, I actually did see this effect. Not sure about the reason; that would probably be a question for mklasson.
Just finishing up my 550T range - sieving speed seems to be ~10% lower than it used to be, but strangely I no longer encounter the 50% slowdown I had earlier...