looks like I help finishing stubspace 3 when I look into my logfiles :D
went from #18 to #6 in the OGR-27 team stats :D
:cheers:
during a restart of the dnetc client I noticed the following lines:
[Aug 15 08:33:42 UTC] OGR-NG #a: Loaded 27/1-20-6-31-28-5 (2.38 Gnodes done)
[Aug 15 08:33:42 UTC] OGR-NG #b: Loaded 27/1-20-6-31-22-14* (314.26 Gnodes done)
[Aug 15 08:33:42 UTC] OGR-NG #c: Loaded 27/1-20-6-31-25-11* (87.56 Gnodes done)
[Aug 15 08:33:42 UTC] OGR-NG #d: Loaded 27/1-20-6-31-28-4 (23.86 Gnodes done)
--> does anyone know what the "*" means after the "description" of the OGR stub? it doesn't appear with every stubspace-4 stub... what is special about them?
It could be that on your lines with asterisks, the final numbers quoted (14 and 11) are three-digit numbers and so won't fit in the client window (because of the number of columns). I don't know for certain, but that seems feasible.
edit: if you have client output logging enabled, what does it show for stubs like these?
On the PS3 I'm seeing
PHP Code:
[Aug 15 14:15:17 UTC] OGR-NG #a: Loaded 27/7-53-8-16-32*
[Aug 15 14:19:17 UTC] OGR-NG #e: Loaded 27/7-53-8-17-31*
[Aug 15 14:53:50 UTC] OGR-NG #f: Loaded 27/7-53-8-18-30*
[Aug 15 14:59:34 UTC] OGR-NG #g: Loaded 27/7-53-8-19-29*
[Aug 15 15:01:01 UTC] OGR-NG #b: Loaded 27/7-53-8-20-29*
[Aug 15 15:02:34 UTC] OGR-NG #d: Loaded 27/7-53-8-21-27*
[Aug 15 15:10:31 UTC] OGR-NG #c: Loaded 27/7-53-8-22-26*
[Aug 15 15:38:05 UTC] OGR-NG #e: Loaded 27/7-57-1-39-12*
this is from my dnetc client running as a Windows service...
attached you'll find the output of "type dnetc.log | find /i "completed""...
there are some with an asterisk and some without. to me it seems only those with more than 150 stats units get asterisks, but I'm not sure.
For completed
PHP Code:
[Aug 15 17:14:18 UTC] OGR-NG #a: Completed 27/7-57-1-40-11* (54.99 stats units)
[Aug 15 17:35:07 UTC] OGR-NG #g: Completed 27/7-58-5-44-2* (38.45 stats units)
[Aug 15 17:36:18 UTC] OGR-NG #f: Completed 27/7-58-5-46-1* (33.48 stats units)
[Aug 15 17:37:41 UTC] OGR-NG #d: Completed 27/7-58-5-45-1* (37.05 stats units)
[Aug 15 17:51:57 UTC] OGR-NG #e: Completed 27/7-60-12-14-23* (56.93 stats units)
[Aug 15 17:58:40 UTC] OGR-NG #b: Completed 27/7-60-12-15-22* (56.97 stats units)
[Aug 15 18:00:45 UTC] OGR-NG #c: Completed 27/7-60-12-16-21* (62.43 stats units)
[Aug 15 18:26:52 UTC] OGR-NG #g: Completed 27/7-60-12-19-18* (56.14 stats units)
I have no idea.
The asterisk only seems to be shown in the client output; it isn't saved in the personal proxy log files, so I don't see how it can be anything important.
Do you guys have full client logs? If the asterisk is only shown by the client, could it be something simple like an indicator for stubs which have been resumed from a checkpoint, or something like that?
I have no pproxy to test that... and I don't think an asterisk is an indicator for resumed stubs; it happens all day long. do you want to see a full client log?
:dunno:
I checked before my last post - the pproxy logs show no asterisk for corresponding stubs which DID show one in the client window. Can you post like 12 hours of client log to pastebin or something?
my last 12 hours of dnetc-client-log...
TheJet has kindly explained the asterisk mystery for us.
Quote:
Yes, the '*' indicates a combined stub. They can happen in any stubspace, and they represent the 'end' of a particular series of stubs, the example I was given was the following series of stubs:
27/1-2-4-5
27/1-2-4-8
27/1-2-4-9
...
27/1-2-4-99
27/1-2-4-100*
Where the star represents the set of stubs between, for example, 27/1-2-4-100 and 27/1-2-4-567, simply because each of those stubs is expected to be much shorter than average [MNodes/stub compared to GNodes/stub for instance].
The stubspaces were assigned by a heuristic which was built from a sampling of a bunch of different stubs, attempting to keep all assigned 'stubs' [including combined stubs] to a reasonable size [say <= 1TNode].
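Out of curiosity I tried to picture how such a combining heuristic might work. This is only my own sketch in Python (the stub names, the node estimates and the 1 TNode budget are assumptions taken from the quote above), not the real stubspace generator:
PHP Code:
# Illustrative only - NOT dnetc's stubspace generator. Combine runs of small
# stubs into one work unit so every unit stays below a size budget.

TNODE = 1_000_000_000_000          # 1 TNode expressed in nodes (assumed budget)

def combine_stubs(stubs, budget=TNODE):
    """stubs: list of (name, estimated_nodes) in stubspace order.
    A unit that covers more than one stub gets a trailing '*' on the name
    of the first stub in the combined range."""
    units = []
    i = 0
    while i < len(stubs):
        name, total = stubs[i]
        j = i
        # Greedily absorb the following stubs while the unit stays under budget.
        while j + 1 < len(stubs) and total + stubs[j + 1][1] <= budget:
            j += 1
            total += stubs[j][1]
        units.append((name + "*" if j > i else name, total))
        i = j + 1
    return units

# Made-up example: 500 tiny stubs of ~2 GNodes each collapse into one unit.
sample = [("27/1-2-4-%d" % k, 2e9) for k in range(100, 600)]
print(combine_stubs(sample)[0])    # -> ('27/1-2-4-100*', 1000000000000.0)
With all-tiny stubs the whole range collapses into a single work unit named after its first stub plus the '*', which matches the 27/1-2-4-100* example above.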
not yet, since it is not online at your link - but it sounds ok - that might also be the reason why those results give so many stats units :D
Nice to know as I wondered about that.
man, you overtook me in no time :cry:
That's what 12 or 13 PS3s will do to you.
the-mk, you should get more ;)
where shall I hide those 13 PS3s in my office? even my quad-core running part-time OGR on my desk does not make my colleagues happy :D
who is "Participant #402,943" and why is he afraid of me?
he added some small firepower to make sure he stays ahead of me...
http://n0cgi.distributed.net/cgi/dne...gi?user=bovine says:
Quote:
:: 28-Nov-2009 19:48 GMT (Saturday) ::
There was a UPS or related power failure at our keymaster that caused
some data inconsistency in our OGR-NG database. As a result, the
project was automatically suspended until manual validation of the
database can be completed.
We hope to be able to bring OGR-NG back online within a day if
everything goes well. Unfortunately, some OGR-NG results that were
returned during this inconsistent period may have been lost.
Meanwhile, RC5-72 is continuing to run and is unaffected. Thanks for
your patience and support!
I'm glad that almost none of my work (if any) was lost! :D
http://n0cgi.distributed.net/cgi/dne...gi?user=bovine says:
Quote:
:: 29-Nov-2009 06:15 GMT (Sunday) ::
Our keymaster is now back to normal operations and OGR-NG is sending
and receiving work again. We don't think very many results, if any,
were lost as a result of the outage. The cause of the power failure
is still being investigated. Keep on crunching! ]:8)
:guntotin:
those are some nice numbers: :guntotin:
and what did you do today? :D
PHP Code:
Overall OGR-27 PProxy stats based on 7 active days:
Smallest: 1.05 Gnodes, submitted on 01/01/2010
Largest: 915.17 Gnodes, submitted on 31/12/2009
Avg. stub: 68.19 Gnodes
Avg. rate: 56,166 G/day, 0.65 Gnodes/sec, 824 stubs/day
Most Gnodes: 125,443 Gnodes, 1.45 Gnodes/sec, on 01/01/2010
Most stubs: 2,370 stubs, 99 stubs/hour, on 01/01/2010
Total: 393,161 Gnodes, 5,766 stubs (0.00095% of project)
:cheers:
stats are down...
http://n0cgi.distributed.net/cgi/dne...gi?user=chrisj
http://n0cgi.distributed.net/cgi/dne...gi?user=bovine
Quote:
:: 09-Jan-2010 23:30 CST (Saturday) ::
It looks like fritz (statsbox) is experiencing some residual issues following
the problems within our provider earlier.
We're working to resolve this as soon as possible. All work submitted will be
credited when stats is brought back online.
Thanks for your co-operation and patience. Moo! ]:8)
hopefully they come back soon!
Quote:
:: 09-Jan-2010 01:46 GMT (Saturday) ::
There are currently some network connectivity issues to the ISP where
our stats server is hosted. Until our provider's connectivity is
restored to normal operations, the accessibility of our statistics
server may be impacted. Thanks for your patience! ]:8)
http://n0cgi.distributed.net/cgi/dne...gi?user=bovine
Quote:
Statsbox is back online. We used some of the downtime to thoroughly upgrade the OS, software payload, RAID firmware, and add some more RAM. Thanks for your patience!
yeah....... was beginning to wonder if they were ever coming back.
http://n0cgi.distributed.net/cgi/dne...gi?user=bovine
Quote:
We're having some connectivity issues with our keymaster server, so
our proxy network is currently holding all submitted results until
connectivity is restored. As a result, stats did not run last night.
We hope to get things back in operational order soon. Thanks!
that's the reason my stats are so low...
They sure seem to be getting into more and more frequent stats issues as time goes by.
Getting lots of small stubs over the last few days, one of which was a new smallest for me for OGR-27:
How about everyone else?
PHP Code:
Smallest: 0.3958 Gnodes (27/1-66-16-3-4-2), submitted on 15/02/2010
Largest: 1,079.32 Gnodes (27/1-20-9-2-24-38), submitted on 08/09/2009
the-mk: I'll try and get a new version of pppla out soon. I'm just trying to tidy up the RC5-72 stuff a bit.
alpha, are you using the free-dc proxy? From the official proxies I'm getting something like 4-10-* to 4-14-*. That's where we should be right now. Perhaps you're getting recycled nodes?
According to the stats (http://stats.distributed.net/project...?project_id=27), we've not yet started the double-check round ("verification"). I guess they want to find the "shorter" ruler (if it exists) first, as this should improve the speed of the whole search.
My local pproxy buffers 50 stubs from the Free-DC pproxy and my local clients also obviously have their own buffer. So, it might take a day or two for stubs to get crunched by my clients, and that's without considering how long they are sitting on the Free-DC pproxy.
They definitely aren't recycled stubs because as you already pointed out, we haven't begun verifying the 6-diff stubs yet. Also, I'm definitely not sitting on them long enough for dnet to consider them abandoned and to resend them out to somebody else.
There's no question about whether an optimal ruler exists or not; it's just a case of checking all possibilities and deciding which result is most optimal. Or are you referring to the predicted optimal ruler with 27 marks?
Quote:
I guess they want to find the "shorter" ruler (if it exists) first, as this should improve the speed of the whole search.
I don't think the first pass is to make the search quicker though, because they will still run the project until 100% verification of all stubs before announcing the most optimal ruler.
see the other thread; just because you're credited the points doesn't necessarily mean the stub has not already been recycled.
Well, even though we check all stubs, the algorithm should be faster the shorter the shortest-known ruler is. I suppose the algorithm uses some pruning strategy to cut away branches of the tree known not to be optimal. Consider a ruler starting with 1000-4-5-*. It has to be worse than optimal, since even the first two marks are further apart than the shortest known ruler is long (overall). We can ignore 1000-* (and longer) rulers.
The same is true during the test of one ruler: if 10-20-30-40-50-60-70-* is (already) longer than the shortest known ruler, we don't have to check any of its children, as they can only get longer. This allows pruning during the test.
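To make the pruning idea a bit more concrete, here is a toy depth-first search in Python (my own sketch, nothing to do with the real OGR-NG cores): it places marks one by one, rejects any position that would repeat a pairwise difference, and cuts a branch as soon as even the tightest possible completion would exceed the best known length. The node counter mimics the nodes the client reports.
PHP Code:
# Toy Golomb ruler search with the pruning described above - my own sketch,
# not the dnetc core. "limit" plays the role of the shortest known ruler.

def ogr_search(n_marks, limit):
    """Return (rulers found, nodes visited) for rulers with n_marks marks,
    first mark at 0 and total length <= limit."""
    rulers = []
    nodes = 0

    def extend(marks, diffs):
        nonlocal nodes
        nodes += 1                                   # each partial ruler is a node
        if len(marks) == n_marks:
            rulers.append(tuple(marks))
            return
        remaining = n_marks - len(marks)
        # Pruning: every further mark needs at least +1, so the next mark can
        # sit at most at limit - (remaining - 1); everything beyond is cut off.
        for pos in range(marks[-1] + 1, limit - remaining + 2):
            new_diffs = {pos - m for m in marks}
            if len(new_diffs) == len(marks) and not (new_diffs & diffs):
                extend(marks + [pos], diffs | new_diffs)

    extend([0], set())
    return rulers, nodes

print(ogr_search(5, 11)[1], ogr_search(5, 25)[1])    # tight vs. sloppy bound
The tighter the bound, the fewer nodes get visited - which is exactly why a shorter known ruler would speed the search up.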
For a stub we cannot (simply) precalculate where this pruning is possible; this is actually the reason why the stubs differ in "size" (the node count represents the number of nodes that could not be pruned and had to be checked).
The only problem with this: for double-checking, the node count per stub is used. If we find a better ruler now, we would still have to use the (currently used) longer one for double-checking all the stubs crunched till now, as otherwise we would find differing node counts. I don't know how they handle this problem; it has not been a problem for OGR-24, -25 and -26 as we didn't find shorter rulers there.
Perhaps, but they don't recycle stubs after every few days. As I already said in the other thread, I think it is more like a matter of weeks, though they shorten this period as we near the end of a project.
Quote:
Well, even though we check all stubs, the algorithm should be faster the shorter the shortest-known ruler is.
The staff don't progressively check to see whether a participant has returned a stub which is more optimal than the predicted one, because the cut-off point for stub checking is hard-coded into the algorithm of the cores: http://faq.distributed.net/cache/229.html That's why it doesn't cause a problem.
Quote:
The only problem with this: for double-checking, the node count per stub is used. If we find a better ruler now, we would still have to use the (currently used) longer one for double-checking all the stubs crunched till now, as otherwise we would find differing node counts. I don't know how they handle this problem; it has not been a problem for OGR-24, -25 and -26 as we didn't find shorter rulers there.
So even if somebody today returned a stub which was more optimal than the predicted one, the remaining search would still take just as long as before.
Well, IF a shorter ruler was found, it would be reasonable to change the hard-coded value in the code (to improve speed). Given the information you quoted, that means new clients would have to be downloaded. That would be too much hassle, I suppose.
Yeah, even though it could significantly reduce the amount of work required to attain project completion, it wouldn't really be feasible to expect every single client to be upgraded.
Also, you'd have the problem which you already touched on earlier - verification is based on the resulting number of nodes. Therefore, verification would have to be done by either one algorithm or the other (predicted optimal or current optimal), depending on which one was used when the stub was first returned.
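A tiny self-contained demo of that point (same toy idea as my sketch a few posts up, not the real cores): the node count a stub reports depends on the length bound the search ran with, so a verification pass with a changed cut-off could never reproduce the counts from the first pass.
PHP Code:
# Node counts depend on the length bound - illustrative sketch only.

def count_nodes(n_marks, limit, marks=(0,), diffs=frozenset()):
    """Nodes visited while searching rulers with n_marks marks, length <= limit."""
    nodes = 1                                        # this partial ruler itself
    if len(marks) == n_marks:
        return nodes
    remaining = n_marks - len(marks)
    for pos in range(marks[-1] + 1, limit - remaining + 2):
        new = {pos - m for m in marks}
        if len(new) == len(marks) and not (new & diffs):
            nodes += count_nodes(n_marks, limit, marks + (pos,), diffs | new)
    return nodes

# Old (longer) cut-off vs. a hypothetical shorter one: the counts differ,
# so the double-check must keep using the original cut-off.
print(count_nodes(5, 12), count_nodes(5, 11))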
that's what my pproxy looks like currently:
usually there is not much traffic in this thread, but now you are stressing me :D
PHP Code:
Overall OGR-27 PProxy stats based on 53 active days:
Smallest: 0.4683 Gnodes, submitted on 09/01/2010
Largest: 1,230.03 Gnodes, submitted on 14/02/2010
Avg. stub: 82.00 Gnodes
Avg. rate: 28,330 G/day, 0.33 Gnodes/sec, 345 stubs/day
Most Gnodes: 127,960 Gnodes, 1.48 Gnodes/sec, on 01/01/2010
Most stubs: 2,400 stubs, 100 stubs/hour, on 01/01/2010
Total: 1,501,470 Gnodes, 18,311 stubs (0.00303% of project)
once again, stats are down...
http://n0cgi.distributed.net/cgi/dne...?user=mikereed
Quote:
:: 10-Mar-2010 00:28 GMT (Wednesday) ::
Our stats system is down while we attend to some unscheduled maintenance. We
hope to have it back online shortly.
Thanks for your patience and continued support! ]:8)
--> smallest and most stubs :D
PHP Code:
Overall OGR-27 PProxy stats based on 107 active days:
Smallest: 0.2060 Gnodes, submitted on 25/04/2010
Largest: 1,230.03 Gnodes, submitted on 14/02/2010
Avg. stub: 76.41 Gnodes
Avg. rate: 25,561 G/day, 0.30 Gnodes/sec, 335 stubs/day
Most Gnodes: 127,960 Gnodes, 1.48 Gnodes/sec, on 01/01/2010
Most stubs: 4,553 stubs, 190 stubs/hour, on 24/04/2010
Total: 2,734,979 Gnodes, 35,792 stubs (0.00591% of project)
Nice job, the-mk! I haven't crunched OGR for a little while, but now you make me want to try and beat your records. ;)
BTW, if you haven't already seen it, pppla v0.91 is now out.
thanks, I have seen the new version! output looks great!
distributed.net client is great - set it and forget it ;)
so beat my records :D
I'm getting closer..
so watch out. :)
PHP Code:
Largest: 1,157.12 Gnodes (27/4-3-17-12-14-44), submitted on 06/05/2010
something my personal proxy produced - today... but the day is not over yet :D
also with today's overall stats - http://stats.distributed.net/project...?project_id=27
PHP Code:
Overall OGR-27 PProxy stats based on 156 active days:
Smallest: 0.2060 Gnodes (27/5-68-1-3-6-2), submitted on 25/04/2010
Largest: 1,230.03 Gnodes (27/4-16-5-14-17-38), submitted on 14/02/2010
Avg. stub: 74.89 Gnodes
Avg. rate: 27,080 G/day, 0.31 Gnodes/sec, 362 stubs/day
Most Gnodes: 127,960 Gnodes, 1.48 Gnodes/sec, on 01/01/2010
Most stubs: 4,987 stubs, 208 stubs/hour, on 13/06/2010
Totals: 4,224,429 Gnodes, 56,412 stubs (0.00932% of stubspace)
not bad :guntotin:
PHP Code:
Projected Completion Times
                  Daily Stubs   Projected End Date
 1 Day Average        510,946   04-Apr-2013
 3 Day Average        417,065   21-Nov-2013
 7 Day Average        336,114   20-Sep-2014
14 Day Average        340,487   31-Aug-2014
30 Day Average        252,093   22-Feb-2016
Well I am taking a break from Seventeen or Bust for a bit.
THE-MK, keep up that production and you will pass me before long :thumbs:
that's what I could do on weekends... during the week I can't keep the output that high...
but it is my goal to pass you and be the #1 on OGR-27 stats of the team :D
not in June, maybe July or August, since I'm on vacation for one week beginning on Friday
pppla from yesterday:
--> most stubs were done yesterday :D
PHP Code:
Overall OGR-27 PProxy stats based on 157 active days:
Smallest: 0.2060 Gnodes (27/5-68-1-3-6-2), submitted on 25/04/2010
Largest: 1,230.03 Gnodes (27/4-16-5-14-17-38), submitted on 14/02/2010
Avg. stub: 74.37 Gnodes
Avg. rate: 27,201 G/day, 0.31 Gnodes/sec, 366 stubs/day
Most Gnodes: 127,960 Gnodes, 1.48 Gnodes/sec, on 01/01/2010
Most stubs: 5,540 stubs, 231 stubs/hour, on 13/06/2010
Totals: 4,270,610 Gnodes, 57,422 stubs (0.00949% of stubspace)
if you produce those 94,000 GNodes every day you will for sure stay #1 on the team stats :D
:bang:
Welcome to the club GURU :cool:
I'm skerred. :thumbs:
oh my god - look at this output! :scared:
:roadkill: