Did you ever notice that unfortunate hardware failures always happen on a holiday weekend? :rock:
That is simply proof that demons are real. ;)
Either that, or it has something to do with people's brains trying to find patterns in everything. Your observation might occur for the same reason that white fluffy clouds can look like cows and choo-choo trains, or whatever.
can't live without stats!! :cry:
you should ensure you are using
pproxy.free-dc.org:2064
I know, but I want to know how many stubs we have completed in the last few days, and I want to know when the project ends.
Don't we all.
Let's just call it a good old 1920s style cliffhanger.
a wee dance of celebration
:banana::banana::banana:15 million :banana::banana::banana:
yeah! happy 15 million! I'd like to say 15 million more, but the project should end soon...
:thumbs: :cheers:
Hey, am I missing something? I remember your guys' stats page used to have stuff like projected scores, upcoming threats, and so on, but I can't find that now. I even created an account in hopes of finding that functionality there. :)
And while I'm at my first post, I'll send a shout out to my stats neighbor Paleseptember (I'm Eriol) - every time I see your name, I get the Fiona Apple song stuck in my head.
welcome,
you don't run ogr by any chance do you :Pokes::whistle::whistle:
Thank you! And yes, I do indeed. I'm one position behind one of your own team members.
http://stats.distributed.net/partici...d=25&id=424559
http://stats.free-dc.org/stats.php?p...eam+Beef+Roast
wrong team :D
No, he means he is one position behind in the overall stats.
682 Paleseptember
683 Eriol
hmm, it looks like the overtake user stats aren't working, for OGR at least (it might be for all non-BOINC projects).
I'll take a look.
OK, have another look. It was a problem with the id in some of the SQL when there is HTML formatting.
All fixed.
Bok
on the subject of stats.......
In APS@home, it shows team 20. Free-DC is 18.
:whistle::whistle:
Fixed. It will roll into live soon.
no no, I'm right :D after seeing that link:
http://stats.free-dc.org/stats.php?p...eam+Beef+Roast :D I think there should be Free-DC...
whatever... :cheers:
Ahem
FREE-DC -- In Excess of 700 Million Gnodes!!!!
An awesome effort one and all! <insert emoticons to your heart's content here>
That's quite a lot!! Great job everybody!
:cheers:
@ LeeF (aka Eriol): I'm catching up to you!! You may have out-produced me the last few days, but one (very huge) day has got me back within a hair's breadth of you :)
Thanks for dropping by LeeF! Welcome to the most friendly team forums with the most kickarse stats :D I hope that you get involved in our community. It's always good to meet new people with a common goal of conquering the world :)
Paleseptember: Excellent! We've been running pretty even for at least a couple of weeks now.
And thanks for the welcome! I hope you guys are able to hold off those part-timers over at Yoyo and Linux-de.
One thing I've never understood...
Look at the following, where I've run the script twice, one hour apart...
[root@quad dnetproxy]# date && ./csv4.pl
Sat Sep 13 11:17:31 MST 2008
This might take a while... please be patient.
Overall OGRp2-25 PProxy stats based on 257 days:
Smallest: 0.01 Gnodes, submitted on 03/02/2008
Largest: 2,821.39 Gnodes, submitted on 17/02/2008
Average: 97.26 Gnodes
Total: 326,262,947.98 Gnodes
Rate: 1,269,505.63 Gnodes/day
Stubspace 1 stubs: 3
Stubspace 2 stubs: 39
Stubspace 3 stubs: 259
Stubspace 4 stubs: 2,910
Stubspace 5 stubs: 31,462
Stubspace 6 stubs: 1,921,308
Stubspace 7 stubs: 1,163,716
Stubspace 8 stubs: 216,177
Stubspace 9 stubs: 18,517
Total stubs crunched: 3,354,391 (0.55016% of project)
You have new mail in /var/spool/mail/root
[root@quad dnetproxy]# date && ./csv4.pl
Sat Sep 13 12:23:35 MST 2008
This might take a while... please be patient.
Overall OGRp2-25 PProxy stats based on 257 days:
Smallest: 0.01 Gnodes, submitted on 03/02/2008
Largest: 2,821.39 Gnodes, submitted on 17/02/2008
Average: 97.25 Gnodes
Total: 326,331,305.61 Gnodes
Rate: 1,269,771.62 Gnodes/day
Stubspace 1 stubs: 3
Stubspace 2 stubs: 39
Stubspace 3 stubs: 259
Stubspace 4 stubs: 2,910
Stubspace 5 stubs: 31,462
Stubspace 6 stubs: 1,921,339
Stubspace 7 stubs: 1,164,708
Stubspace 8 stubs: 216,199
Stubspace 9 stubs: 18,529
Total stubs crunched: 3,355,448 (0.55033% of project)
You have new mail in /var/spool/mail/root
[root@quad dnetproxy]#
How come there were 31 stubspace 6 stubs processed when there are only 5 left in the entire space? Does this mean they are redundant and not scored?
I don't think the proxy is set too high, seeing as 30,000 or so are being done per day from it.
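As an aside, the Average and Rate lines in that output are easy to sanity-check by hand. A minimal sketch (my own reconstruction from the numbers shown, not the actual csv4.pl source):

```python
# Re-derive the Average and Rate lines from the first csv4.pl run above.
# The formulas are my assumption based on the printed figures.
total_gnodes = 326_262_947.98   # "Total" line
days = 257                      # stats period
stubs = 3_354_391               # "Total stubs crunched" line

average = total_gnodes / stubs  # Gnodes per stub
rate = total_gnodes / days      # Gnodes per day

print(f"Average: {average:,.2f} Gnodes")   # ~97.26
print(f"Rate: {rate:,.2f} Gnodes/day")     # ~1,269,505.63
```

Both match the script's output to two decimal places.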
They may be from me, Bok. I just hooked up a few machines that were offline for about 3 months that probably had old stubs. You aren't going crazy or anything :crazy:
maybe, maybe, but I've actually noticed this a number of times over the past few weeks, where we show more stubs crunched in space 6 than the project reports done for that day...
I'll try and check it tomorrow too..
Bok :cheers:
The key phrase is "three months" (I think; a brief search of the faq-o-matic has turned up nothing).
I think the time-out on an OGR workunit is 90 days.
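If that 90-day figure is right (it's from memory, not documentation), machines offline for three months would indeed be just past the window. A throwaway check, with the timeout value as an assumption:

```python
from datetime import date, timedelta

TIMEOUT_DAYS = 90  # assumed OGR work-unit timeout; not confirmed by any FAQ

def is_stale(fetched_on: date, submitted_on: date) -> bool:
    """True if a stub fetched on fetched_on would be past the timeout."""
    return submitted_on - fetched_on > timedelta(days=TIMEOUT_DAYS)

# Offline for ~3 months (92 days) -> just past the 90-day window:
print(is_stale(date(2008, 6, 13), date(2008, 9, 13)))  # True
```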
Team FDC to the end... :clap: joined :)
I only just joined, so it will take the next rollover or two to show me. Darn it, why can't they just send your password out to you automatically!
Nope, I submitted a bunch yesterday and got full credit for them today. As paleseptember said, there is supposedly a timeout for stubs, but I've never breached it even at the worst of times. However, each stub any of us submits which has already been crunched and verified by other people does not contribute to project completion, which won't do us any favours. It definitely is in our best interests as a team to have a fresh in-buffer of work.
As the project has neared the end I have periodically reduced the size of my pproxy cache to combat this very point.
I assume you're running your own pproxy? If so, edit the proxyper.ini file and, under the [ogrp2] tag, change the minkeysready and maxkeysready values, which represent the minimum and maximum cache size (in stubs) respectively. My pproxy serves stubs to about half a dozen machines and my current min/max level is 150/200. I'll reduce it further once we get heavily into stubspaces 8 and 9. Like this:
[ogrp2]
minkeysready=150
maxkeysready=200
If you're just running the dnet client on its own with no pproxy, you need to edit your dnetc.ini file. Under the [ogr_p2] tag, change or add a value for fetch-workunit-threshold, which represents the maximum cache size in stubs. Like this:
[ogr_p2]
fetch-workunit-threshold=50
Wouldn't know a pproxy if it bbit me on the bbutt. Using Bok's pproxy.
There is a line such as you describe:
[ogr]
core=-1
fetch-time-threshold=0
fetch-workunit-threshold=3
Should I change this? Will it be wonderful? :confused:
I don't think it would matter in your case as you are using a proxy (the free-dc one).
I might have to look into changing the size of the proxy buffers here, but I generally try to keep 10 days worth right now.
Bok
Thanks, Bok. I was kind of getting that idea myself. HOWEVER, if anyone can see anything that I could improve here, let me know:
dnetc.ini
[parameters]
[email protected]
[misc]
project-priority=OGR-P2=1,RC5-72=0
[triggers]
exit-flag-filename=exit.now
pause-on-no-mains-power=no
restart-on-config-file-change=yes
[display]
progress-indicator=auto-sense
[ogr]
core=-1
fetch-time-threshold=0
fetch-workunit-threshold=3
[processor-usage]
priority=0
[networking]
autofindkeyserver=no
keyserver=pproxy.free-dc.org:2064
nofallback=true
dialup-watcher=disabled
interfaces-to-watch=*
disabled=no
[buffers]
threshold-check-interval=0:60
checkpoint-filename=check.txt
frequent-threshold-checks=25
[logging]
log-file-limit=500
log-file=dnetc.log
log-file-type=fifo
Seems to be crunching along nicely.
Here's mine ;)
[parameters]
[email protected]
[misc]
project-priority=OGR-P2=1,RC5-72=0
[networking]
autofindkeyserver=no
keyserver=pproxy.free-dc.org:2064 ; use Bok's pproxy server
nofallback=false
dialup-watcher=active
interfaces-to-watch=*
disabled=no
[triggers]
exit-flag-filename=
pause-on-no-mains-power=no
restart-on-config-file-change=yes
[display]
progress-indicator=auto-sense
[OGR-P2]
core=-1
;fetch-time-threshold=24
fetch-workunit-threshold=24 ; for quads, 12 for duals
[rc5-72]
core=-1
;fetch-time-threshold=24
fetch-workunit-threshold=24 ; for quads, 12 for duals
[processor-usage]
priority=3 ; make 'em work harder, like on crunchers. In Linux, 0 = nice 19.
max-threads=-1
[buffers]
threshold-check-interval=0:15
checkpoint-filename=check.txt
frequent-threshold-checks=3
[logging]
log-file-limit=1
log-file=dnetc.log
log-file-type=fifo
Cool. Done. Which, btw, I hope we are, nearly. I've got some other projects calling for my attention...
:smoking:
What the heck...
Either my script is broken or dnet have decided to recheck some stubs.
Today so far, I've crunched through:
Stubspace 2 stubs: 2
Stubspace 4 stubs: 2
Stubspace 5 stubs: 4
Stubspace 6 stubs: 21
Stubspace 7 stubs: 103
Stubspace 8 stubs: 32
Stubspace 9 stubs: 3
Actually, now that I've looked into it, it might be a box I upgraded to a pre-release client version earlier today... strange. I'll keep an eye on it.
Was just going to post the same thing.... I guess they've come from you then :)
[root@quad dnetproxy]# ./csv5.pl 20080928
This might take a while... please be patient.
Overall OGRp2-25 PProxy stats based on 1 days:
Smallest: 0.11 Gnodes, submitted on 28/09/2008
Largest: 953.71 Gnodes, submitted on 28/09/2008
Average: 64.31 Gnodes
Total: 1,010,507.42 Gnodes
Rate: 1,010,507.42 Gnodes/day
Stubspace 2 stubs: 2
Stubspace 4 stubs: 2
Stubspace 5 stubs: 6
Stubspace 6 stubs: 297
Stubspace 7 stubs: 13,485
Stubspace 8 stubs: 1,823
Stubspace 9 stubs: 99
Total stubs crunched: 15,714 (0.00258% of project)
Bok
Well, I got full credit yesterday for everything that passed through my pproxy:
Total: 14,161.66 Gnodes
So it was all legitimate and valid as far as I can tell. Very strange.
let's revive the old one ;)
I had to take one machine down from OGR because of a message like this:
Feb 9 21:34:53 webserver kernel: CPU0: Temperature above threshold
Feb 9 21:34:53 webserver kernel: CPU0: Running in modulated clock mode
darn...
You could always specify the number of cores to run on with the -numcpu flag. That way, you could run on only 50% of the cores and keep the others idle, or put them on a less stressful project to keep the temps down a bit.
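For example (flag name as I remember it; check dnetc -help to confirm on your build):

```shell
# Start the dnet client on a single core, leaving the other(s) idle.
# -numcpu limits how many crunch threads dnetc spawns.
./dnetc -numcpu 1
```

The same effect can be made persistent with max-threads under [processor-usage] in dnetc.ini.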
I've done this in the past, but usually because I want one core on OGR and another core on a different project.
the problem is: this is a single core cpu :D
maybe it is the same problem as half a year ago: broken heatsink mounting...
it's time to buy a new machine for the office webserver... it has been running for 2 to 3 years now with dnetc :)
told my colleague to check what's going on with that machine, because I'm on a business trip...
the problem was that someone had put a cardboard box in front of that computer, so it could not get enough fresh air to cool the CPU... after moving it away, the computer runs fine again :)
this computer is for company use, and I don't want to put much money into a company computer --> no Phenom or whatever...
OGR-27 stubspace 3 is almost crunched - http://stats.distributed.net/project...?project_id=27
Does anyone know how I can identify such a workunit (stubspace 3, stubspace 4) in the dnetc log files?
Stubspace 1 stubs are 3-diff.
Stubspace 2 stubs are 4-diff.
Stubspace 3 stubs are 5-diff.
Stubspace 4 stubs are 6-diff.
In your pproxy log files, something like 27/1-3-66-18-28 is 5-diff which is stubspace 3 because there are five numbers separated by dashes after the "27/". Similarly, 27/40-1-26-59 is 4-diff which is stubspace 2, and so on.
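To make that rule mechanical, here's a small sketch (my own helper, not part of any dnet tool) that counts the numbers after the project prefix in a stub ID:

```python
def stubspace(stub_id: str) -> int:
    """Map a pproxy-log stub ID like '27/1-3-66-18-28' to its stubspace.

    An n-diff stub has n dash-separated numbers after the '27/' prefix,
    and n-diff corresponds to stubspace n - 2 (3-diff -> 1, 4-diff -> 2, ...).
    """
    diff = len(stub_id.split("/", 1)[1].split("-"))
    return diff - 2

print(stubspace("27/1-3-66-18-28"))  # 5-diff -> 3
print(stubspace("27/40-1-26-59"))    # 4-diff -> 2
```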