
Archive for the ‘Complexity’ Category

When complex systems are built, they’re built out of parts, and the parts come in multiple types.

For electronic circuits, any circuit can be built from a finite, universal set of component types. Does biology follow this “universal language” approach?

Well, no, as I discuss in this paper. And it turns out not even electronic circuits — as they exist in the real world, built by real companies — follow the universal-language approach.

And neither do Legos.

In both biology (networks of cells, neurons, or ants) and artifacts (networks of Legos, circuit components, or people), as the network gets larger, its division of labor increases – and it does so as a power law.
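That power-law claim can be made concrete with a quick sketch. The numbers below are made up for illustration (they are not data from the paper): if the number of distinct part types C grows with network size N as C ≈ a·N^b, then a least-squares line fit in log-log space recovers the exponent b.

```python
import math

# Hypothetical network sizes N and counts of distinct part types C.
# (Illustrative numbers only -- not data from the paper.)
sizes = [10, 100, 1_000, 10_000, 100_000]
types = [3, 8, 22, 60, 165]

# If C ~ a * N^b, then log C = log a + b * log N, so an ordinary
# least-squares line fit in log-log space recovers the exponent b.
xs = [math.log(n) for n in sizes]
ys = [math.log(c) for c in types]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)
print(f"fitted exponent b = {b:.2f}, prefactor a = {a:.2f}")
```

A fitted exponent between 0 and 1 is what "division of labor increases, but sublinearly" looks like in this framing: types keep multiplying as the network grows, just more slowly than the network itself.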

But there are key differences as well that I discuss in the paper: Roughly, biological networks carry out their functions with more of their parts than human-created networks.

Samuel Arbesman, senior fellow at the Kauffman Foundation, has written a piece about this at WIRED. Roger Highfield also has written a piece on it at the Telegraph, this one aimed more at what’s gone wrong with Legos. And I’ve written a piece for Discover Mag on this what-happened-to-Legos issue.

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of
Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man and The Vision Revolution. He is finishing up his new book, HUMAN, a novel about our human future.


Humans are hive enthusiasts. We love social insects like ants and bees, and we pay extra attention to Star Trek episodes when the you-will-be-assimilated Borg are featured. But what exactly is so interesting about hives? They’re interesting to us because, en masse, they amount to a superorganism, with analogs to organisms at the genetic level, the reproductive level, and the behavioral level. Also, just as larger, more complex organisms tend to have a greater number of specialized cell types, larger ant colonies tend to have a greater number of “ant types” (see the figure).

And in new research this week in the Proceedings of the National Academy of Sciences, Chen Hou from Arizona State University found that the metabolic rates of ant colonies follow the same law – Kleiber’s Law – that solitary-living insects follow in how metabolism scales with body mass. Metabolically, colonies act like superorganisms rather than just big groups of organisms. Ant colonies really are organisms.

Why, though, should we find that fascinating? If ant colonies are organisms, then they should get imbued with the same level of interest we find in the average organism. To understand what makes social insects especially exciting, we must get inside our own heads, and the perceptions we evolved to possess.

In addition to the perceptions you may have heard about – like color, motion and form – we have intrinsically much more complicated perceptions. Face recognition is one example, but relevant to our purposes here is the perception of “animacy”. Certain stimuli elicit perceptions in us of there being a living, animal-like, thing. Even a simple square moving about can elicit this kind of perception, so long as it moves in a sufficiently animal-like fashion.

But there is one thing our “animal perception” requires that ant colonies and the Borg violate: Animals must be solid objects. To be an animal, our perceptual system demands that the constituent cells be physically connected, not merely informationally connected. Hives achieve all the requisite informational interconnectivity without their members physically touching, and although that is a difference that makes no computational difference, it makes all the difference in the world to our perception.

Our perceptions of animacy lead us to the conclusion that the ant drones are the animals, not the colony itself. That is what makes social insects so interesting: social insects are cases of animals that don’t fit our evolved perceptual expectations for animals. What makes social insects and the Borg so interesting is, then, more about our perceptual apparatus than it is about the intrinsic coolness of ants or assimilation.

And if hives can be exhilarating to our brains for perceptuo-cognitive reasons, then we can exhilarate in the other direction. Rather than concentrating on the animal-hood of colonies, let’s ask about the colony-hood of animals. Animals are, after all, massive colonies of cells. The problem with thinking of animals as cell colonies is that even if we could see individual cells, cells don’t have the animal-like properties ants do, and thus cannot tap into our animal perceptions.

Or can they? Cells do often behave in an ant-like, animal fashion, but move too slowly for you to perceive their animal-likeness. When one views videos of cells in a growing animal, and the video allows one to see the individual cells moving, the animal begins to look instead like a colony of cells. Take a look at the second and third video here — http://pr.caltech.edu:16080/events/kulesa/ — showing cell movements in the developing chick embryo from the laboratory of Paul Kulesa. (In fact, download the videos to your computer first, and then play them, so that you can make the video larger.) You will see, especially in the third movie, individual cells poking about, and at this spatio-temporal scale your animal perceptions are activated: the chick is no longer an animal, but is perceived instead as a colony of single-celled animals.

This first appeared on January 24, 2010, as a feature at the Telegraph.

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


There is an apocryphal story about a graduate mathematics student at the University of Virginia studying the properties of certain mathematical objects. In his fifth year some killjoy bastard elsewhere published a paper proving that there are no such mathematical objects. He dropped out of the program, and I never did hear where he is today. He’s probably making my cappuccino right now.

This week, a professor named Peter Sheridan Dodds published a new paper in Physical Review Letters further fleshing out a theory concerning why a 2/3 power law may apply for metabolic rate. The 2/3 law says that metabolic rate in animals rises as the 2/3 power of body mass.  It was in a 2001 Journal of Theoretical Biology paper that he first argued that perhaps a 2/3 law applies, and that paper — along with others such as the one that just appeared — is what has put him in the Killjoy Hall of Fame.  The University of Virginia’s killjoy was a mere amateur.

Peter Sheridan Dodds, Buzzkill

The 2/3 scaling law, you see, is intuitively obvious (even if not easy to defend rigorously in detail). The surface area of animals scales as the 2/3 power of their body mass, and so the rate of heat loss scales as the 2/3 power. If metabolic rate scaled as the 2/3 power, few theorists would likely have bothered taking the problem on.

But in the 1930s one Max Kleiber accumulated data that suggested to him that metabolic rate scales as the 3/4 power of body mass. It came to be known as Kleiber’s Law. 3/4 is fun. …to a theorist. 2/3, however, is boring. 3/4 is so fun that theorists had a field day trying to explain it, and there was an especially gigantic spike in the fun starting from 1997, especially from a series of papers by West, Brown and Enquist, and also by Banavar and Maritan.

And that’s when buzzkill Dodds came along with his 2001 paper. He re-examined the data and suggested that a 2/3 law could not be rejected. There may be no 3/4 law to explain after all. Nothing to see here; move along, everyone. That paper rubbed further salt in the wound by pointing out that one of the theories deriving the 3/4 law had an error.
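Part of why the data struggle to settle the dispute: the two candidate laws, B = c·M^k with k = 2/3 or k = 3/4, differ only by a factor of M^(1/12), which is roughly a factor of three across six orders of magnitude of body mass. A minimal sketch (arbitrary normalization constant and illustrative masses; not data from any of these papers):

```python
# Toy comparison of the two candidate scaling laws, B = c * M^k,
# normalized so both agree at M = 1 kg (c is arbitrary here).
def metabolic_rate(mass_kg: float, exponent: float, c: float = 1.0) -> float:
    return c * mass_kg ** exponent

# From roughly shrew-sized to elephant-and-beyond, in kg.
for mass in [0.01, 1.0, 100.0, 10_000.0]:
    b23 = metabolic_rate(mass, 2 / 3)
    b34 = metabolic_rate(mass, 3 / 4)
    # The ratio of the two predictions is M ** (3/4 - 2/3) = M ** (1/12).
    print(f"M = {mass:>8} kg   2/3 law: {b23:10.3f}   3/4 law: {b34:10.3f}   ratio: {b34 / b23:.2f}")
```

With noisy empirical metabolic measurements, a discrepancy that slowly accumulating is hard to detect, which is why re-analyses of the same data can disagree about the exponent.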

Although Dodds is still at it with his current paper, he has, to compensate for his party-downer laurels, accumulated some of the most interesting research out there – from rivers to bodies to disease to the happiness of songs over time. (Thanks, Peter, for being a good sport.)

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

This first appeared on February 9, 2010, as a feature at ScientificBlogging.com.


This first appeared on November 23, 2009, as a feature at ScientificBlogging.com, coincidentally aligning with the 150th anniversary of The Origin of Species.

“Come on into the hot tub,” I told my three year old boy. But he wouldn’t budge. No way was he joining his older sister in there. “It’s warm, and it feels nice!” I urged, “There’s nothing to be afraid of.” But it was only when I turned off the jets that I could eventually coax him in.

“Why would my boy be so afraid of a hot tub?” I wondered. But as I reflected upon my pantywaist boy, I decided that perhaps I wasn’t being fair to him. In fact, in hindsight, I think he was behaving rationally. Hot tubs are frightening. They violently churn and bubble, as if they were actually boiling. I have spent so much time in hot tubs over the years that I now hardly notice the foam, the burning temperature, the Pseudomonas bacteria and the skin-ripping, high-pressure jets.

We get used to things, and not just to jacuzzis. My jacuzzification also happens for intellectual matters (a topic of an earlier piece, The Value of Being Aloof: Or, How Not to Get Absorbed in Someone Else’s Abdomen). One generation’s jacuzzi is another generation’s maelstrom.

In particular, we get used to evolution. We scientists, especially. We’re so accustomed to evolution that when we find skeptics of evolution, we think of them as poor, blind, close-minded saps who can’t see the most obvious truths.

Darwin's jacuzzi

But how obvious is evolution, really? And how close-minded are those who don’t yet accept evolution?

Let’s start with the obviousness of evolution. First and foremost…evolution ain’t obvious! Evolution is perhaps the craziest true theory ever!   “Let me get this straight: Add a teaspoon of heritable variation, a ton of eating one another, and epochs of time…get yourself a superzoo of fantastically engineered creatures. Yeah, that’s not crazy!”

The only reason most of us scientists don’t find evolution crazy is that we’re jacuzzified to a wrinkly pulp. And this level of comfort with the bizarre theory of evolution can be counterproductive when trying to explain evolution to the uninitiated. You won’t convince my three-year-old to get into the hot tub by suggesting that there is no bubbling or churning – he can see the bubbling and churning with his own eyes. (BTW, no intent to analogize evolution skeptics with three-year-olds! Just a useful analogy that popped up.) If you’re so jacuzzified that you fail to see the churning, you will be incapable of addressing the real worry: that the churning might hurt.

Similarly, if you’re so used to evolution that you fail to see how weird it is, you’ll be in a poor position to explain why it isn’t as crazy as it at first sounds. Better to say, “Yes, evolution is crazy, but there’s overwhelming evidence that it is, indeed, the mechanism underlying the emergence of life in all its glory.” (And you should also admit that, although we have mountains of evidence that evolution is the mechanism, we are very far from understanding how exactly it does it, just as we’re sure the brain underlies our thoughts but do not comprehend how the brain works. This was the topic of an earlier ScientificBlogging.com piece titled ‘Is Evolution Fast Enough?’ How I Responded.)

The fact that evolution wins the prize for “non-obviousishness” should already begin to change one’s view about the supposed close-mindedness of evolution’s skeptics. Evolution is extraordinary, and extraordinary theories take extraordinary evidence. Extraordinary evidence indeed exists, but you can’t communicate the evidence in a simple one-liner. (Much less in a one-liner addressing the other as a “close-minded sap”.)

Religious folk surely have their hang-ups (whereas I am utterly hang-up-less), but religious doctrine has come a long way over the centuries. Few still believe the Earth is at the center of the universe, for example, something that was once perhaps just as central to the religious world view as creation. But the evidence for the Earth not being at the center is overwhelming. And more important than being overwhelming, the idea that the Earth is not at the center of the universe is not nearly as crazy as evolution.

Religion can, then, be convinced of scientific discoveries it is initially opposed to. And it is reasonable to expect that the more intrinsically implausible a theory sounds, the longer it will take for religion to become convinced. Evolution is the king of the implausible, and perhaps that’s why it is one of the last major scientific truths yet to infiltrate all the corners of religion.

But evolution won’t infiltrate religion if we scientists can’t address the skeptic’s worries. And we won’t be able to address the worries if we’re so overcooked in evolution that we are incapable of seeing just how preposterous it seems.

====

There were some interesting comments at ScientificBlogging.com, which can be read here. One worth repeating is a clarification I offered in response:

“For the Grand Canyon, I can see how more and more erosion, with self-organizing drainage networks, leads to deeper and deeper and wider and wider etc., etc., etc.

But imagine that I told you that, after all that erosion, the result wasn’t the Grand Canyon, but a modern football stadium, with seats, bathrooms, flat field, fake grass, box seats — the works.  That is, imagine after more and more blind activity, one gets a highly engineered complex structure that *does* amazing stuff.”

That’s what makes the hypothesis of natural selection so crazy. I’d go so far as saying that if you don’t appreciate how crazy it is, you don’t really get it.

 

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Originally a piece in ScientificBlogging, September 28, 2009…

You open your dictionary to figure out what your friend meant by ‘nasute,’ only to find that the definition is “A wittol, or jemadar; bannocked in an emunctory fashion.” What good is this dictionary, you wonder, if it only refers me to other words I don’t know? And worse still, the definitions of some of these words refer back to ‘nasute,’ the word you didn’t know in the first place! Even if your attempt to learn what ‘nasute’ means is not infected by circularity, you face a quick explosion of words to look up: the words in the definition, the words in each of these definitions, and so on. The dictionary appears, then, to be a terribly messy tangled web.

In reality, however, dictionaries aren’t quite that worthless. …and the definition of ‘nasute’ above is, thankfully, fiction. The standard line for why dictionaries are useful is that the typical users of dictionaries already know the meanings of much of the dictionary, and so a disorderly dictionary definition doesn’t send them on an exploding wild goose chase.

Dictionaries would, however, be only a source of frustration for a person not knowing any of the vocabulary. And, therefore, dictionaries – and the lexicon of language they attempt to record – can’t be our mental lexicon. If a word is in our mental lexicon, then we know what it means. And if we know what it means, then our brain is able to unpack its meaning in terms it understands. The brain is not sent on a wild goose chase like the one fated for a Zulu native handed the Oxford English Dictionary.

Compared to the disheveled dictionary, the mental lexicon is much more carefully designed. The mental lexicon is hierarchical, having at its foundation a small number – perhaps around 50 – of “semantic primes” (or fundamental atoms of meaning) that are combined to mentally define all our mental concepts, something the linguist Anna Wierzbicka has argued. And our internal lexicon has a number of hierarchical levels, analogous to the multiple levels in the visual hierarchy or auditory hierarchy.

The “visual meaning” of a complex concept – e.g., the look of a cow – gets built out of a large combination of fundamental visual atoms, e.g., oriented contours and colored patches. In the same way, the (semantic) meaning of the concept of a cow gets built out of a large combination of fundamental semantic atoms, e.g., words like ‘you’, ‘body’, ‘some’, ‘good’, ‘want’, ‘now’, ‘here’, ‘above’, ‘maybe’, and ‘more’. In both sensory and semantic hierarchies, the small number of bottom level primes are combined to build a larger set of more complex things, and these, in turn, are used to build more complex ones, and so on. For vision, sample objects at increasingly more complex levels include contours, junctions, junction-combinations, and objects. For the lexicon, examples of increasing complexity are ‘object’, ‘living thing’, ‘animal’, ‘vertebrate’, ‘mammal’, ‘artiodactyl’, and ‘cow’.

In our natural state, the mental lexicon we end up with depends upon our experiences. No rabbits in your locale, no lexical entry in the head for rabbits. And the same is true for vision. The neural hierarchy is somewhat flexible by design, but built to do its lexical work hierarchically, in an efficient fashion, no matter the specific lexicon that fills it.

There is quite a difference, then, between the disordered, knotted dictionary and our orderly, heavily optimized, hierarchical mental lexicon. Language’s vocabulary – determined by cultural selection, and whose structure dictionaries partially measure – does not seem to have harnessed the lexical expectations of our brain.

However, are dictionaries really so tangled? Back in 2005 while working at Caltech, I began to wonder. Dictionaries surely do have some messiness, because they’re built by real people from real messy data about the use of words: so some circularities may occasionally get thrown in by accident. But my bet was that the signature, efficient hierarchical structure of our inner lexicon should be in the dictionary, if only we looked carefully for it. Language would work best if the public vocabulary were organized in such a way that it would naturally fit the shape of our lexical brain, and I suspected cultural selection over time should have figured this out. …that it should have given us a dictionary shaped like the brain: a braintionary.

So I set out on a search for these signature hierarchical structures in the dictionary. A search to find the hidden brain in the dictionary. In particular, I asked whether the dictionary is hierarchically organized in such a way that it minimizes the total size of the dictionary needed to define everything it must.

To grasp the problem, a starting point is to realize that there is more than one way to build a hierarchical dictionary. One could use the most fundamental words to define all the other words in the dictionary, so that there would be just two hierarchical levels: the small set of fundamental (or atomic) words, and the set of everything else. Alternatively, dictionaries could use the most fundamental words to define an intermediate level of words, and in turn use these words to define the rest. That would make three levels, and, clearly, greater numbers of levels are possible.

My main theoretical observation was that having just the right number of hierarchical levels can greatly reduce the overall size of the dictionary. A dictionary with just two hierarchical levels, for example, would have to be more than three times larger than an optimal one that uses around seven levels.
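The paper contains the real model; here is only a loose back-of-envelope analogue (my toy cost function, not the one from the paper). Suppose a fully unpacked definition bottoms out in P prime atoms, and an L-level dictionary lets each definition cite b = P^(1/(L−1)) entries from the level below. Charging one unit per cited entry, the relative cost behaves like L·P^(1/(L−1)): huge for very shallow hierarchies, slowly growing again for very deep ones, minimized somewhere in between.

```python
# Toy model (hypothetical, not the paper's actual analysis): relative
# dictionary cost as a function of the number of hierarchical levels.
def dictionary_cost(P: float, levels: int) -> float:
    b = P ** (1 / (levels - 1))  # entries cited per definition
    return levels * b            # levels tried, times citations per level

P = 200.0  # hypothetical number of prime atoms per fully unpacked word
for L in range(2, 12):
    print(f"{L:2d} levels -> relative cost {dictionary_cost(P, L):7.1f}")
```

Under these made-up numbers the two-level scheme is dramatically more expensive than the optimum, and the minimum lands at an intermediate depth – the qualitative shape, though not the exact quantities, behind the claim that shallow dictionaries must be much larger than optimally deep ones.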

Via measurements from and analysis of WordNet and the Oxford English Dictionary, in a paper I published in the journal Cognitive Systems Research I provided evidence that actual dictionaries have approximately the optimal number of hierarchical levels. I discovered that dictionaries do have the structure expected for the brain’s lexical hierarchy. Dictionaries are braintionaries, designed by culture to have the structure our brains like, maximizing our vocabulary capabilities for our fixed ape brains.

What it means is that language has culturally evolved over the centuries and millennia not only to have the words we need, but also to have an overall organization—in terms of how words get their meanings from other words—that helps minimize the overall size of the dictionary. …and simultaneously helps us efficiently encode the lexicon in our heads.

The journal article itself is linked here.

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


For The Quarterly Review of Biology

Review of Melanie Mitchell (2009) Complexity: A Guided Tour (Oxford University Press, Oxford).

Complexity – what is it, and does it matter? Melanie Mitchell, a denizen of the community of complexity researchers, provides an engaging introduction to the many interdisciplinary issues surrounding attempts at understanding how fantastic holistic attributes can arise from teeming masses of underwhelming components. …how minds arise from simple neurons, and cagey ant colonies from embarrassingly thick-headed individual ants. The book is primarily aimed at the undergraduate or high school student, covering nearly all the first-order facets of complexity within dynamics, information, computation, evolution, and networks. But even researchers familiar with the traditional stomping grounds may enjoy many of her meta-discussions on where the field of complexity stands today, and whether it is progressing or dying.

Melanie Mitchell’s book may itself exemplify a very good reason for maintaining “complexity” as a moniker for the suite of disciplines it unites: books like hers may be pedagogically useful for the growth of young scientists. First, the topics taken up in her book are exciting to newcomers, tapping into the romance of science many of us researchers once had (and now struggle to recall). Second, the issues in complexity require interdisciplinary training, which may motivate students to acquire it – something they will never regret wherever they end up in science (and odds are they won’t end up in “complexity” proper). Third, an introduction to the problems under the heading of complexity helps put students in a non-reductionist mindset, so that when such complexity-fed students land in traditional scientific disciplines, they push their fields toward the development and testing of large-scale, unifying theories.

If Melanie Mitchell’s book were required reading for undergraduate freshmen, I would anticipate a large surge in the number of students interested not only in complexity, but in science more generally. And not just more students, but students more exercised about what may lie ahead as they attempt to come to grips with nature.

Mark Changizi is Professor of Cognitive Science at RPI, and the author of The Vision Revolution (Benbella, 2009) and The Brain from 25000 Feet (Kluwer, 2003).

[A related story in ScienceDaily: cities shaped like brains]
