
Wim22



ace_dent
12-02-2004, 06:29 AM
Hmmm... the current evolution of the WIM circuit has got stuck again (3rd time it's stopped at 0.973!). Maybe this is the best that can be evolved with the current methods...? How long will this run continue?

Cheers,
Andrew

Scrambled Eggs
12-02-2004, 09:55 AM
I was referring to this in my thread about the evaluation function.

The problem as I see it is that the evaluation function is lexicographical and, therefore, discontinuous and ill-behaved. I proffered the alternative of starting difficult problems with a continuous evaluation function. The theory of genetic algorithms is well-developed for continuous functions. Graphically, imagine trying to find the highest peak in a mountain range. GAs work by starting at several points throughout the mountain range and gradually working their way up. They emulate evolution through mutation and gene-swapping (like sex) to explore the space in the hope of finding that highest peak. Lexicographical functions, on the other hand, have no such geometry. I don't know if any mathematical theory has been developed for maximizing them. Miguel Garvie's practical contribution has been to show that GAs can work with lexicographical functions. I think mathematical theory needs to be developed for lexicographical functions. It may turn out that my proposal of starting with a continuous function and finishing lexicographically is theoretically best.
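
To make the mountain-range picture concrete, here is a toy GA sketch. It is purely illustrative: the genome, the fitness landscape, and every parameter are stand-ins of mine, not DHEP's actual code.

import random

# Toy GA: a population climbs a fitness landscape via mutation and
# gene-swapping (crossover). All names and numbers are illustrative.
GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 200

def fitness(genome):
    # Stand-in continuous landscape: fraction of 1-bits in the genome.
    return sum(genome) / len(genome)

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]          # keep the fitter half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
print("best fitness:", max(fitness(g) for g in pop))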

That being said, DHEP is still an experiment. When the lexicographical evaluation function stops working, as it has with WIM Locked, the problem should be set aside and a new one started. By accumulating experience, DHEP can determine the range of circuits that it can optimize lexicographically. That, in itself, is an important piece of knowledge. Then, for a set of circuits that could not be optimized lexicographically, restart them using the "best" lexicographical solutions and a continuous evaluation function. When that function is optimized, take the "best" continuous solutions and optimize them lexicographically.
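
In code terms, the two orderings and the proposed two-stage schedule would look roughly like this. The metric names, the weight layout, and the evolve() helper are assumptions of mine for illustration, not DHEP's actual API:

# Two ways to rank a circuit's metrics, plus the two-stage plan.
# Metric names and the weight ratio are illustrative assumptions.

def lexicographic_key(m):
    # Python tuple comparison is exactly lexicographic: task fitness
    # dominates absolutely, then loose, then stringent, then parsimony.
    return (m["task"], m["loose_sc"], m["stringent_sc"], m["parsimony"])

def continuous_score(m, ratio=2):
    # Weighted sum: each metric counts `ratio` times the next one down,
    # so a big gain below can outweigh a small loss above (smoothness).
    return (ratio**3 * m["task"] + ratio**2 * m["loose_sc"]
            + ratio * m["stringent_sc"] + m["parsimony"])

def two_stage(population, evolve):
    # Stage 1: climb the smooth continuous landscape past the plateau.
    population = evolve(population, key=continuous_score)
    # Stage 2: restart from the best continuous solutions and finish
    # under the strict lexicographic ordering.
    return evolve(population, key=lexicographic_key)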

The practical effect of my plan could be faster production of fault-tolerant circuits, the goal of DHEP.

ace_dent
12-02-2004, 11:58 AM
Hi Scrambled Eggs.

Good to read your comments, esp. your other thread on a continuous evaluation function. I'm not sure whether you're an authority on GAs, but I can see a flaw in your suggestion. Using the weightings you described, a circuit with good [parsimony] would be scored 'fitter' far quicker than a circuit with good [task fitness]. This might equally result in an evolutionary plateau, but this time with the circuit in a less desirable state: few components, but unable to perform the basic task. Maybe the answer lies in some combination of these methods? <EDIT: Sorry, this reply belongs more in your other thread; re-reading your post, it sounds like a great suggestion!>

I understand that this is the nature of experimental work- and that a null result can also be useful... but surely we've now established that this is just one of those 'difficult' circuits... can't we move on?

As a side note, when this occurs it raises the question: do evolved circuits achieve the 'best' solution, or, like biological evolution, a sufficient solution? To test this would require the whole design space to be explored- a BIG task... but this would be very suitable for the BOINC platform, where different sections of the design space could be broken into work units...

Cheers.

Scrambled Eggs
12-02-2004, 02:12 PM
ace_dent:

I think you misunderstood my continuous suggestion. While it is possible to max out on parsimony in an early stage, a small sacrifice in parsimony can achieve a great gain in task fitness. Task fitness is weighted at least 8 times as much as parsimony. The fittest solutions eventually will max out on task fitness, as that is where the greatest gain lies. The final solutions may or may not be useful - that would be interesting to find out in itself. The second stage then takes the "best" continuous solutions and optimizes them using the lexicographical function.

An interesting variation would be to experiment with different weightings in the continuous function. 2 is a nice, logical number for the ratios. Would 3 work better? As the ratio increases, in the limit, you get the lexicographical function. That's the theoretical link between the two.
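
Here is a quick numeric illustration of that limit. I am assuming the ratios are applied as weights r^3, r^2, r, 1 from most to least important; that layout is my guess:

# As the ratio r grows, the weighted sum's ranking converges to the
# lexicographic one: the top weight dwarfs everything beneath it.
def score(metrics, r):
    # metrics ordered most- to least-important; weights r^3, r^2, r, 1
    return sum(m * r**i for i, m in enumerate(reversed(metrics)))

a = (1.0, 0.96, 0.68, 0.0)   # stronger in the 2nd metric
b = (1.0, 0.93, 0.93, 0.0)   # stronger in the 3rd metric

for r in (2, 3, 10, 100):
    print(r, "a beats b:", score(a, r) > score(b, r))
# r=2 and r=3 rank b first; by r=10 the ordering is already the
# lexicographic one (a first), and it stays there as r grows.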

There does appear to be a link between this problem and bin-packing. See, for example, 2D Rectangular Packing with LFF and LFF/T (http://www.cse.cuhk.edu.hk/~fyyoung/study/CADgroup6.ppt).

Scrambled Eggs
12-02-2004, 02:40 PM
I just did a little experiment with continuous evaluation functions and the statistics reported at The Statistics Topology Page (http://www.informatics.sussex.ac.uk/users/mmg20/dhe/statsTopology.php). With the ratios set to 100, the ordering is the same. With the ratios set to 2, I get slightly different rankings. An interesting case is:


Loose_Self-Checking   Stringent_Self-Checking   Score_2       Score_100
0.961538462           0.684410646               13.26469728   1009683.826
0.925925926           0.932642487               14.90846287   1009352.524

where Task Fitness = 1, Parsimony = 0, Score_2 is the continuous function with ratios of 2, and Score_100 is the continuous function with ratios of 100.

You can see that using 100 for these two produces identical results to the lexicographical ordering. What is interesting is the behavior of Score_2: it has the reverse ordering of Score_100. What it is doing is ranking the circuit with far better stringent self-checking above the circuit with slightly better loose self-checking. I hypothesize that it is easier to tweak the second circuit to improve its loose self-checking without greatly harming (if not improving) its stringent self-checking than to tweak the first circuit to improve either self-checking metric. This is the power of using a continuous evaluation function. It is not difficult to see how this generalizes to task fitness and parsimony.
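
If anyone wants to check the numbers: assuming the weights are r^3, r^2, r, 1 over (task fitness, loose, stringent, parsimony) - an assumption on my part - r = 100 reproduces the posted Score_100 values exactly, while r = 2 gives different absolute values (so the live formula may include further metrics) but shows the same reversed ranking:

rows = [
    # (task, loose_sc, stringent_sc, parsimony) from the table above
    (1.0, 0.961538462, 0.684410646, 0.0),
    (1.0, 0.925925926, 0.932642487, 0.0),
]

def score(m, r):
    # Assumed weight layout: r^3, r^2, r, 1 (most to least important).
    return r**3 * m[0] + r**2 * m[1] + r * m[2] + m[3]

for m in rows:
    print(round(score(m, 100), 3), round(score(m, 2), 8))
# r=100 prints 1009683.826 and 1009352.524, matching the table.
# r=2 prints 13.21497514 and 13.56898868: not the posted Score_2
# values, but the same reversal - the second row now ranks first.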

Scrambled Eggs
12-02-2004, 03:20 PM
Originally posted by ace_dent
Hi Scrambled Eggs.

do evolved circuits achieve the 'best' solution, or like biological evolution, a sufficient solution. To test this would require the whole design space to be explored- a BIG task... but this would be very suitable for the BOINC platform, where different sections of the design space could be broken into work units...

Cheers.

As in biology, the goal is to achieve a sufficient solution - that's why a genetic algorithm is used in the first place. It is simply impossible to explore the entire space. Let k be the number of kinds of circuit components, including the null component, and n be the number of components. Then, disregarding interconnections, there are k^n possible circuits. With n large enough, it is simply impossible to simulate all of the circuits. This is further compounded by the very large number of possible sets of interconnects. Hence the need for a good, non-exhaustive algorithm like a GA.
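
For a sense of scale (k and n below are illustrative stand-ins, not WIM's actual parameters):

# Back-of-the-envelope size of the space, before even counting
# interconnections. k and n are illustrative stand-ins.
k, n = 10, 50                      # component kinds (incl. null), slots
space = k**n
evals_per_sec = 1e9                # a very generous simulation rate
years = space / evals_per_sec / 3.15e7
print(f"{space:.3e} circuits, ~{years:.1e} years to enumerate")
# -> 1.000e+50 circuits, ~3.2e+33 years: exhaustive search is out.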

ace_dent
12-02-2004, 06:33 PM
Yep, just done the maths... I think 'BIG' was something of an understatement...

ace_dent
12-03-2004, 06:38 AM
Just noticed a newly evolved circuit has taken the lead... guess I just needed to be more patient! :blush:

Now firing up my crunchers to make amends...