my logcompressor.sh has a line with "gzip $1" in it - the result is a file like "pproxyogrng20110322.log.gz"... but it was just a suggestion to implement that one
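for reference, a minimal sketch of what the whole script could look like (only the "gzip $1" line is from my actual script - the wrapper around it is just a hypothetical illustration):

#!/bin/sh
# logcompressor.sh - compress a rotated pproxy log file
# usage: ./logcompressor.sh pproxyogrng20110322.log
if [ -z "$1" ]; then
    echo "usage: $0 <logfile>" >&2
    exit 1
fi
# gzip replaces the file in place, leaving e.g. pproxyogrng20110322.log.gz
gzip "$1"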
the-mk
Nope, I took my (puny) main machine off OGR when the new Folding@home 7 open beta started. I'll be hunting you at a rate of 1.5T per day though!
BIG NEWS, PLEASE READ: http://blogs.distributed.net/2011/04/11/15/12/mikereed/
So basically the sooner you update your clients, the better. But you should certainly do it before the second pass of stubspace 4 begins.
What they failed to mention is that we will have to do a third pass of stubspaces 1-3 (~6.5 million stubs), so the project has just been lengthened.
Hi
I tried the new .518 [x86/Stream] on my old Linux box but that didn't work.
The [x86/ELF/uclibc] client is still the older .517, dated 2010-06-28
Guess I'll stick to the old Linux client until it's updated
OK, switched to the prerelease [x86/ELF/uclibc] v2.9110.519; it seems to work
only a few days away from team-rank #8
the-mk
wow, today we are so close - 311,000 GNodes
hopefully tomorrow we will have #8
the-mk
We got it!
Next up are #7 and #6, which are both around 74 million Gnodes away. We're outproducing #7 two-fold and #6 five-fold based on yesterday's production. Beyond them are our good friends at Ars, but they have a substantial gap.
the-mk
The second pass of the stubspace, or at least recycling of still-unfinished stubs, seems to have started. I just got several 1-* range stubs (1-2-4-18-20-36 and similar).
Engage!
based on http://stats.distributed.net/project...?project_id=27 we have already started to verify stubspace 4!
the-mk
Yes, obviously yesterday was the first day of the second pass :-) Interesting thing about the numbers: as only 233,000,000 of 295,000,000 stubs (of the first pass) are finished, 62 million stubs are either still in some buffer or were discarded. That's about 20%. I had expected that number to be lower ...
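(For anyone checking my math, a quick shell snippet using only the figures above:

echo $((295000000 - 233000000))                  # 62000000 stubs unaccounted for
echo "scale=1; 62000000 * 100 / 295000000" | bc  # 21.0 percent

so "about 20%" holds up.)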
Engage!
I found that odd as well. The last nodes I received prior to this second pass were 27/41- which was Thursday of last week. I knew we were moving quickly, but I was expecting a few more days of small stubs. I first thought maybe there are personal proxies out there stuffed with small stubs, but the past few days have not shown signs of this work returning. It makes me wonder if they pulled off early or plan to mix some of the very small stubs in with the large, second-pass stubs. As of yet, no signs of that either.
Yoyo@Home likely has a fairly large buffer, but certainly not enough to account for that...
Well, Bok, if anybody used a huge buffer, we would still see a higher amount of stubs, I guess. But just as hobbes said, the number of stubs processed dropped from "huge" to "tiny" within a day or two, indicating most existing buffers ran out quite quickly and got the new (huge) stubs.
Engage!
Sorry to spam you guys a little, but I finally got around to updating pppla with a pretty massive haul of new features. The link is here. Please give it a try if you run a pproxy!
To keep this on topic, we did eventually secure position #7 and we are still plodding along at a reasonable pace. Looks like we lost a bit of steam, but there is still plenty of time left.
sounds scary
the-mk
Been getting some big stubs this month, three of which made it into my all-time top 10 for OGR-27:
#2 - 1,098.91 Gnodes on 13/11/2011
#8 - 1,013.64 Gnodes on 12/11/2011
#10 - 1,000.03 Gnodes on 13/11/2011
You guys had any big ones?
To justify what I said in my last post, here's my new biggest!
#1 - 1,186.69 Gnodes on 15/11/2011
two days in a row now Guru has been making more than 1,600,000 GNodes
scary
does he want team-rank #8 back?
the-mk
This thread is still alive!
Agent Smith was right: "I hate this place. This zoo. This prison. This reality, whatever you want to call it, I can't stand it any longer. It's the smell! If there is such a thing. I feel saturated by it. I can taste your stink and every time I do, I fear that I've somehow been infected by it."
seems it was worth bringing it back to life - Guru has now been making 1,600,000 GNodes a day for 9 days in a row
the-mk
There's a recent blog entry by Mike Reed: http://blogs.distributed.net/2013/11/16/01/38/mikereed/
Engage!