Normalized statistics



trif
04-08-2002, 12:48 AM
Since the proteins take varying amounts of time to fold, I would strongly suggest that the stats system start normalizing the amount of credit to a benchmark protein. Especially before you start putting very large proteins out there. It's not a big deal if things vary by 10 or 20 percent, but when things differ by orders of magnitude, you may find that odd behavior develops among stats-conscious participants.
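
Something like this is all I'm suggesting (a minimal Python sketch; the benchmark figures are made up for illustration):

# Hypothetical benchmark-normalized credit: weight each protein's
# structures by how long one takes relative to a reference protein
# timed on the same benchmark machine.
BENCH_SECONDS_PER_STRUCTURE = 120.0  # reference protein (assumed figure)

def normalized_credit(structures_done, seconds_per_structure):
    # a structure that takes twice as long earns twice the credit
    return structures_done * (seconds_per_structure / BENCH_SECONDS_PER_STRUCTURE)

print(normalized_credit(1_000_000, 240.0))  # -> 2000000.0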

MAD-ness
04-08-2002, 07:17 PM
You have been listening to Larry too much! ;)

At this point, as we are still in the testing stages of the project, I do not see the lack of normalization as a problem. After the current protein we will begin much larger runs on much larger proteins. I am assuming that the trend will be towards larger and larger proteins (not a direct increase at all times, but a general trend towards larger protein sizes). So, the lure of waiting for a 'fast' protein might very well be suicidal rather than 'cunning.' With more of the major teams getting involved with DF, I don't think that further down the road a single team (however large) will be able to sit out month-long runs and then pick up the slack by allocating massive amounts of boxen to the project for the next protein. Ars, DPC, HardOCP, and DSLR might all be able to swing a couple hundred GHz for the span of a month, but if a protein is THAT small, the other teams would most likely be doing the same thing, thus "normalizing" the stats in a somewhat "natural" way.

Until it proves to be a barrier to further growth of the project, I think the varying structure computation times of different proteins are not something worth worrying about or investing a lot of effort into.

SETI doesn't normalize for different angle ranges, does it? A WU is a WU and over time it 'smooths' out.

Shaktai
04-08-2002, 09:48 PM
Originally posted by MAD-ness
You have been listening to Larry too much! ;)

[...]

Yeah! What he said. :D

FoBoT
04-09-2002, 08:25 AM
normally i would agree that a "normalization" factor would be a good thing

the problem is that we have already started, so to apply it now would be problematic and to retroactively apply it would be very problematic

i think the rules/formulae need to be laid out ahead of time, not after the fact

so i would leave it be

also, in the case of DF, the massive number of stats may tend to dampen anomalies, perhaps

trif
04-10-2002, 12:28 AM
Heh, I think maybe Larry would think I don't listen to him enough.

Larry took something that the Dutch Power Cows started and tried to refine it a little, and that's trying to figure out the total firepower of various teams, even spread across different projects. It intrigued me, and I want to take it further, to the point of setting up "SuperStats" across all projects. I intend to bench all the projects under controlled circumstances on the same machine. Many projects have WU's that vary, but they're varying all the time so that it tends to even out. If I bench DF's current protein and then equate 1 million structures to so much 1 GHz CPU time, it won't remain true for each succeeding protein. If the next protein takes 20% longer, then the output of any team in DF will appear to fall immediately by 20%. The only other alternative is to bench again each time a new protein comes out, and well, I think that is better done by the project's management, not me. With a project like SETI, the WU's may vary, but their distribution stays the same over time.
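
To make the conversion concrete, here is roughly what I have in mind (a Python sketch; every protein name and timing below is invented):

# Hypothetical seconds-per-structure on a 1 GHz benchmark box,
# re-measured for each protein as it is released.
bench_sec_per_structure = {
    'protein_3': 90.0,
    'protein_4': 108.0,  # 20% slower than its predecessor
}

def ghz_cpu_hours(protein, structures):
    # 1 GHz CPU-hours represented by a structure count
    return structures * bench_sec_per_structure[protein] / 3600.0

print(ghz_cpu_hours('protein_3', 1_000_000))  # 25000.0
print(ghz_cpu_hours('protein_4', 1_000_000))  # 30000.0 for the same real work

Without re-benching at each changeover, that second figure would instead show up as a 20% overnight drop in every team's apparent output.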

I know that DF is already underway, but the project is very young as projects go, and that is why it would be most important to do it now. As time goes by, the contribution of "unnormalized" structures will become more and more minuscule. SETI's WU's used to take only half the time they do now, and they didn't change the "point value" to match, but the old WU's are becoming less and less of a factor as time goes on.

A lot of people will focus on the "dailies" or daily output of the top SuperTeams. If DF is a major contributor to any team's statistics, then whenever there is a protein change, it will change the "dailies" considerably. One team could end up passing another solely because of this artifact from DF, and there will be griping about that. Teams that don't care so much about their standing in DF compared to the "SuperStats" will flood into DF whenever there is an "advantageous" protein, and flood out to other projects when there is a "disadvantageous" one. It will amplify the time differences between protein changes. A fast protein will get its 10 billion or 100 billion structures much faster than expected, and a slow protein may take much longer to complete because the heavy hitters abandon it for a more advantageous project. I don't particularly like this kind of stats-driven behavior, which is why it is better to have credit that corresponds fairly well to the actual amount of CPU time expended.

Even if I don't do the SuperStats, somebody else will. There is starting to be quite a bit of interest in it, which means that someone will do it.

[Ars]KD5MDK
04-10-2002, 03:21 PM
Well, I just don't see it as that big of a deal.

As long as all teams are equally affected here, I don't think it really matters. Now, SuperStats will get a little borked, but I'm afraid that doesn't really bother me. Here's another way to do it that sort of covers the issues:

Run your stats as percentages of daily output. This way it doesn't matter if a protein is fast or slow; the proportion of total output by a team should stay the same. How does that sound?
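
A rough sketch of that, assuming daily per-team structure totals are available (team names and counts invented):

def daily_shares(team_totals):
    # map each team to its percentage of the day's combined output
    total = sum(team_totals.values())
    return {team: 100.0 * n / total for team, n in team_totals.items()}

print(daily_shares({'Ars': 40_000, 'DPC': 35_000, 'Free-DC': 25_000}))
# {'Ars': 40.0, 'DPC': 35.0, 'Free-DC': 25.0}

Since a slow protein slows every team at once, the shares hold steady across protein changes even though the raw structure counts swing.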

Angus
04-10-2002, 05:16 PM
This discussion was predictable...

http://groups.yahoo.com/group/distributedfolding/message/220

xj10bt
04-10-2002, 05:52 PM
Angus <--- posts link to Yahoo

BAD ANGUS, BAD! :spank:

Angus
04-10-2002, 07:51 PM
Hey... worked for me :jester:


But for those who are Yahoo impaired...

March 9, 2002

Just wait until you start the next proteins....

If they take a longer or shorter time to crunch, the stats crazies
will be up in arms to get 'stats weighting' implemented, citing the
unfairness of getting the same credit for longer and shorter proteins.

Those discussions went on for months in the other folding projects,
with some teams even quitting the project over the issue.

Have fun...

OldBiker
Free-DC

MAD-ness
04-10-2002, 07:56 PM
While I am interested in the work that Larry has done, and your plans are very exciting, I don't really see it as something anyone here (participants or leaders) ought to be overly concerned about.

People will misbehave regardless of how you organize things; it doesn't seem very pragmatic to me for the project leaders to re-order things just to make things easier on teams from other projects. If major teams join the project for a 'faster' protein (computation speed depends on more than just the number of residues), then that protein will be finished pretty damn quickly and a new protein will be begun.

Over the long haul, the people who want to run DF will run DF. If large DC teams want to screw around and ramp up at certain times and down at others, who are we to stop them, and how would we?

I do not think that over the long term the stats will be quite as erratic as people seem to believe. The sheer number of results returned coupled with the fact that EVERYONE changes proteins at the same time will 'smooth' out the production trends.

The funniest thing about this talk is that each WU is pseudo-RANDOMLY generated. On any given protein, on each machine, each structure is computed using a random seed. There are no standard or semi-standard structures, let alone proteins. This makes a precise measurement of production and resource allocation essentially impossible. However, the number of structures returned 'smooths' out the variance amongst the structures generated, to the point where I doubt anyone really notices (unless they are literally watching the structures be built).
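
That smoothing is just the law of large numbers; a toy Python simulation (with an invented per-structure timing distribution) shows the variance washing out:

import random
random.seed(1)

def mean_seconds(n, base=100.0, jitter=50.0):
    # average compute time over n randomly varying structures
    return sum(random.uniform(base - jitter, base + jitter) for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, round(mean_seconds(n), 2))
# the mean hugs 100.0 ever more tightly as n grows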

I think that we could get together some people and come up with some sufficiently accurate methods for tracking stats in relation to allocated resources (i.e. number and type of boxen).

ColinT
04-10-2002, 10:54 PM
Ye Olde Biker knows what he's talking about. Stats weighting is always futile.

We've been there and done that and then puked.

Paratima
04-10-2002, 11:11 PM
Yepper! Weighting is one of the reasons F@H is so screwed up.

And UD generated a ton of bad feelings with their scheme for weighting by box configuration, with a separate scheme, if you can believe it, for weighting by time-to-complete. GACK!

We got a good thing going here. Let's try real hard NOT to f*ck it up.

trif
04-11-2002, 02:30 AM
Don't get me wrong. I *hate* stats-driven behavior. But if you don't account for it, you can end up paying the price. I would hate to see the organizers of this project end up missing a deadline because they put up a big protein and all the superteams thundered off to another project that would give more bang for the buck. It is not for my convenience that I bring this up, but because I *know* the kinds of problems this can cause.

I am not talking about variability between random folds. Like SETI WU's or distributed.net's OGR project, this tends to work out in the wash. It's when the variability is predictable and manipulable that the problems come in. If Stanford hadn't normalized their WU's, they'd be finding an awful lot of the big ones ending up expiring because the min/max'ers would find a way to dump them in favor of the smaller ones.