I recently dug out a photo of a set of books recommended to me by Steve Rayhawk. Steve uses probably the largest vocabulary of concepts of anyone I've met and I'd quite like to know where they come from, so I thought I'd give it a read.
Core Thesis
First, the basic thesis. The following passage contains perhaps the most condensed description of what the book is trying to get at. The book has just spent 20 or so fascinating pages explaining the way that the particular shape of this large, single-celled organism develops according to the operation of simple field equations, which lead to a limited number of complicated-looking shapes. In particular, the surface
From this perspective, the Dasycladales constitute a natural group not because of their history but because of the way their basic structure is generated. The historical sequence in which different species have evolved is of considerable interest, for it may tell us something about their neighborhood relations in the space of parameters (genes) that define the domains leading to different forms. But we can make sense of this history only within the context of a morphogenetic theory describing how the different forms are generated. This is a theory of what Stephen Jay Gould has called morphospace, the space of possible morphologies for species organized according to certain principles. Our theory suggests that whorls of laterals arising from an unbranched stalk are typical of the order, because this is an intrinsically robust, generic form that the morphogenetic field of this type of organism generates.
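The "morphogenetic field" here is in the Turing reaction-diffusion tradition (Goodwin's own model involves the coupling of calcium to the cell's mechanics; the toy below is a generic activator-inhibitor sketch of my own, not the book's equations). It illustrates the property the quote relies on: for simple field equations, only a narrow band of pattern wavelengths can grow at all, so only a limited repertoire of forms, such as whorls with a characteristic spacing, is available.

```python
import numpy as np

# Dispersion relation for a generic two-species Turing system:
# perturbations of wavenumber q grow at rate given by the largest
# eigenvalue of (J - q^2 * D). The kinetics matrix J is hypothetical.
J = np.array([[0.5, -1.0],
              [1.0, -1.0]])    # activator self-enhances, inhibitor suppresses
Du, Dv = 1.0, 20.0             # inhibitor diffuses much faster than activator

def growth_rate(q):
    M = J - q**2 * np.diag([Du, Dv])
    return np.max(np.linalg.eigvals(M).real)

qs = np.linspace(0.0, 1.5, 301)
unstable = [q for q in qs if growth_rate(q) > 0]
print(f"unstable wavenumber band: {unstable[0]:.2f} .. {unstable[-1]:.2f}")
```

The uniform state (q = 0) is stable, very short wavelengths are damped by diffusion, and only an intermediate band can grow - which is why such fields generate a discrete, robust family of patterns rather than arbitrary shapes.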
There's clearly a lot of interesting material here and the book is certainly worth reading for the way that it presents its examples: focused, specific, biological examples, but with a reasonable degree of mathematical precision applied to them. I won't try to cover them here.
These concrete examples are used to grind the book's favourite axe: the gene-centric approach to understanding organisms and evolution. Goodwin's counter-position is summed up in the following quote:
Genes do not control; they cooperate in producing variations on generic themes.
Unfortunately, on this subject Goodwin is writing more as a polemicist than a scientist, and doesn't take much trouble to head off potential counterarguments. For example, it's often repeated that during the growth of an organism there is a limited set of options defined by the morphogenetic field, and genes are limited to deciding which of these routes is taken. What is rarely mentioned is that the options are defined by a chemical context which is itself regulated by genes. So there is a subset of genes which influence which of a limited set of stable attractors is reached, but the set of possibilities is itself genetically determined, and so it's not clear that there are any known limits on the possibilities these fields can offer. What perhaps is, or could be, shown is that most of the variation happens at this second level rather than at the primary level which sets the nature of the morphogenetic field. Perhaps gradual change at the underlying level is particularly difficult and so such shifts are rare - but this is not demonstrated. (It also remains to be shown how representative this is of the huge field of morphology.)
The gene-minimizing perspective was particularly interesting in the case of single-celled organisms, where the protein-crystal structure of the flagellum (the bacterial "tail") depends not just on the existing genes but on the seed structure provided by the parent organism (and thus on genes from potentially many generations ago). I suspect Goodwin has felt highly vindicated by the discovery of methylation and epigenetic effects in general. As with those effects, though, the magnitude of their importance is (as far as I know) unclear, and I would lean towards its being fairly limited.
This reminds me of an old and visceral realization that came from studying biology, and gametogenesis in particular: there's never a moment in development when everything is in some kind of state of rational control, where the genetic plans are carefully consulted and the organism pulls itself out of nothingness by its genetic bootstraps. Instead there's only ever a series of cascading catalysations, stretching all the way back to the first replicator. Of course it could never really be otherwise, but it was a rather unnerving feeling to understand it viscerally. The image that sticks in the mind is of a boulder bouncing down the side of a cliff, forever.
Tetrapod limbs are defined as the set of possible forms generated by the rules of focal condensation, branching bifurcation, and segmentation in the morphogenetic field of the limb bud. All forms are equivalent under transformations that use only these generative processes. With this we arrive at a logical definition of tetrapod limbs that is independent of history. The idea of a common ancestral form as a special structure occupying a unique branch point on the tree of life ceases to have taxonomic significance. Now tetrapod limbs could have arisen many times independently in different lineages of fish, and they still would be equivalent as long as they were made in the same way, whereas in a Darwinian (historical) taxonomy independent origins mean basic difference. To see how this may have happened, we need to examine how fish fins and tetrapod limbs are related to one another.
This suggests a greater likelihood of similar evolutionary paths being followed on other worlds, especially if similar chemistries are involved, but I think it is suggestive of similar development even in more radically different scenarios.
The emphasis is on robust procedures, ones that can be repeated over and over with the same result, and on how this limits the forms created, such as animal bones only splitting once (though how does this match fish fin shapes?)
What would constitute a test for such a perspective? Perhaps one might imagine repeated experiments in which bacteria are exposed to various antibacterial agents, tracking how long on average it takes for immunity to develop - the idea being that a random walk in the space of overall forms would have a smaller spread of times than a random walk in a more stylized morphogenetic field, where some forms are far more easily accessible than others. I feel that I don't really have the context to judge the extent of the factual disagreements here, or whether it's entirely a matter of new (at the time) developments and issues around framing. Goodwin is rather unhelpful in this area.
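To make the "spread of times" intuition concrete, here's a toy Monte Carlo sketch - my own construction with made-up rates, nothing from the book. Each lineage waits a geometric number of generations for a resistance mutation. In the uniform model every lineage has the same per-generation rate; in the "structured field" model resistance is easily accessible from some starting forms and nearly inaccessible from others, with the rates matched so that both models have the same mean waiting time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated lineages per model

# Hypothetical per-generation probabilities of hitting a resistant form.
p_hi, p_lo = 0.05, 0.002
p_uni = 2 / (1 / p_hi + 1 / p_lo)  # harmonic mean, so mean waiting times match

# Uniform morphospace: every lineage has the same accessibility.
t_uniform = rng.geometric(p_uni, size=n)

# Structured field: half the lineages start near an accessible resistant
# form (p_hi), half start where resistance is nearly unreachable (p_lo).
p_mix = rng.choice([p_hi, p_lo], size=n)
t_field = rng.geometric(p_mix)

print(f"mean waiting time:  uniform {t_uniform.mean():.0f}, field {t_field.mean():.0f}")
print(f"spread (std dev):   uniform {t_uniform.std():.0f}, field {t_field.std():.0f}")
```

Even with the means matched, the structured model shows a markedly larger spread (by the law of total variance, mixing rates across lineages can only add variance) - which is the signature the proposed experiment would look for.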
Neo-Darwinism focuses on selection as the primary source of biological order, as we have seen, organisms being essentially survival machines. Kauffman questions this, asking whether there is not a rich source of emergent order that is available for free in complex systems of the type encountered in biology, wherein many units interact in simple ways.
One very interesting thing the book brings out is the way that evolution rates change over time. I've never really understood what is meant when a species like a nautilus or a coelacanth is said to have remained unchanged for millions of years - over the same span in which mammals have gone crazy: is it just that they look pretty much the same while there's still a lot of internal development? I suppose that the genetic data only goes back a million years or so at most, but that's still time for a fair bit of study. A quick search so far hasn't provided what I was hoping for.
There are undoubtedly going to be times when there are a lot of free niches and so a lot of room to grow into - a rapidly changing ecology - and this would lead to a lot of new speciation and change. The book seems to be suggesting that there's another underlying process: the development of particularly robust and fruitful generating forms, each of which opens up a whole space of possible organisms and leads to greater change and newly emerging species.
The analogy in ML systems would be that you have a learning rate (rate of mutations) and high training loss (new environments), but also required are particular functional forms which are capable of accurately modelling the situation. In this way it is reminiscent of the lottery ticket hypothesis, which concerns particular subnetworks of neural networks that, when trained in isolation, can perform as well as the overall network. This kind of thinking moves us away from the idea that a neural network just constantly takes little steps in function-space towards higher performance, and towards the idea that it succeeds because particular subsystems open up rich veins of progress. This rings true for my (fairly limited) experience in training complex ML models where, firstly, initialization is key, but even amongst models with identical initialization methods, some models simply take ages to take off, before suddenly rocketing up to match the performance of other runs.
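A minimal sketch of that lottery-ticket idea, using a toy linear model in plain numpy rather than the original deep-network pruning experiments (all names and numbers here are my own, purely illustrative): train a dense model, keep the largest-magnitude weights as the "ticket", rewind those survivors to their shared initialization, and retrain only that subnetwork.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression problem: only the first 5 of 50 features matter.
d = 50
X = rng.normal(size=(500, d))
true_w = np.zeros(d)
true_w[:5] = rng.normal(size=5)
y = X @ true_w + 0.01 * rng.normal(size=500)

def train(w, mask, steps=300, lr=0.01):
    """Gradient descent on squared error, restricted to masked weights."""
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(y)
        w = w - lr * grad * mask
    return w * mask

w0 = 0.1 * rng.normal(size=d)          # shared initialization
dense = train(w0, np.ones(d))          # train the full network

# Keep the top 20% of weights by magnitude: the "winning ticket".
k = d // 5
mask = np.zeros(d)
mask[np.argsort(np.abs(dense))[-k:]] = 1.0

ticket = train(w0, mask)               # rewind survivors to w0, retrain sparse

loss = lambda w: float(np.mean((X @ w - y) ** 2))
print(f"dense loss {loss(dense):.4f}, ticket loss {loss(ticket):.4f}")
```

The sparse subnetwork, rewound to its original initialization, trains to roughly the dense model's performance - the "rich vein" was there in the initialization, waiting to be found.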