Some of the proteins were not very close (visually) to the 'real' structures, but almost all of them were recognizably similar. We aren't trying to find the exact structure in microscopic detail (they do that in labs with X-ray crystallography or some other stuff I am not allowed to attempt to pronounce, spell, or understand).

We are trying to find (and prove) a method for taking an unknown protein and 'predicting' its structure with a certain degree of accuracy. From what I have gathered, NONE of the protein structure prediction methods (Folding@home and DF included) should be expected to produce an EXACT result. Lab work would still need to be done for the exact result to be found (and even that might not technically be 'exact').

Based upon my limited understanding of the science involved, the results in Phase IA (not every single one, but the best of each protein run) were good enough to be of use: for example, in sorting, categorizing, and identifying the proteins discovered as a result of the Human Genome Project. You obviously can't do lab work on every single one of them simultaneously, so you either find ways to "sort through" them or you sit around waiting while the limited lab resources are used to find the structures of all of these unknown proteins.

DF also appears to have a good apparatus for running very fast tests of both sampling and scoring methods as new methods are discovered, created, or shared.

Basically, they have a platform that allows them to test sampling and scoring methods very quickly (a very large number of iterations over a very large "sample set" in a very short amount of time). As the sampling and scoring methods become more accurate (and robust), the amount of more direct science they can do with the project should increase.

These are all my (long-winded) guesses, btw, not anything I took from a paper or am quoting. (It would be a very bad paper or a very dumb scientist if that were the case.)

=)