
Posts Tagged ‘Brain’

I’ve argued there’s no imminent singularity, and I’ve thrown water on the idea that the web will become smart or self-aware. But am I just a wet blanket, or do I have a positive vision of our human future?

I have just written up a short “manifesto” of sorts about where we humans are headed, and it appeared in Seed Magazine. It serves not only as a guidepost to our long-term future, but also as one for how to create better technologies for our brains (part of the aim of the research institute, 2ai, that I co-direct with colleague Tim Barber).

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books, 2009) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011).

Read Full Post »

Jeremy Hsu recently interviewed me for a piece in LiveScience (picked up at MSNBC and Yahoo) about brain evolution and its relationship to the prospects for AI.  His piece was in reaction to a Brain, Behavior and Evolution piece I wrote, and also to the following featured SB piece I wrote…

===

There are currently two ambitious projects straddling artificial intelligence and neuroscience, each with the aim of building big brains that work. One is The Blue Brain Project, and it describes its aim in the following one-liner:

“The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.”

The second is a multi-institution, IBM-centered project called SyNAPSE, whose press release describes it as follows:

“In an unprecedented undertaking, IBM Research and five leading universities are partnering to create computing systems that are expected to simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size.”

Oh, is that all!

The difficulties ahead of these groups are staggering, as they (surely) realize. But rather than discussing the many roadblocks likely to derail them, I want to focus on one way in which they are perhaps making things too difficult for themselves.

In particular, each aims to build a BIG brain, and I want to suggest here that perhaps they can get the intelligence they’re looking for without the BIG.

Why not go big? Because bigger brains are a pain in the neck, and not just for the necks that hold them up. As brains enlarge across species, they must modify their organization in radical ways in order to maintain their required interconnectedness. Convolutedness increases, number of cortical areas increases, number of synapses per neuron increases, white-to-gray matter ratio rises, and many other changes occur in order to accommodate the larger size. Building a bigger brain is an engineering nightmare, a nightmare you can see in the ridiculously complicated appearance of the dolphin brain relative to that of the shrew brain below – the complexity you see in that dolphin brain is due almost entirely to the “scaling backbends” it must do to connect itself up in an efficient manner despite its large size. (See http://www.changizi.com/changizi_lab.html#neocortex )

[Figure: dolphin brain versus shrew brain, showing the dolphin brain’s far greater size and convolutedness]

If the only way to get smarter brains was to build bigger brains, then these AI projects would have no choice but to embark upon a pain-in-the-neck mission. But bigger brains are not the only way to get smarter brains. Although for any fixed technology, bigger computers are typically smarter, this is not the case for brains. The best predictor of a mammal’s intelligence tends not to be its brain size, but its relative brain size. In particular, the best predictor of intelligence tends to be something called the encephalization quotient (a variant of a brain-body ratio), which quantifies how big the brain is once one has corrected for the size of the body in which it sits. The reason brain size is not a good predictor of intelligence is that the principal driver of brain size is body size, not intelligence at all. And we don’t know why. (See my ScientificBlogging piece on brain size, Why Doesn’t Size Matter…for The Brain?)

This opens up an alternative route to making an animal smarter. If it is brain-body ratio that best correlates with intelligence, then there are two polar opposite ways to increase this ratio. The first is to raise the numerator, i.e., to increase brain size while holding body size fixed, as the vertical arrow indicates in the figure below. That’s essentially what the Blue Brain and SyNAPSE projects are implicitly trying to do.

But there is a second way to increase intelligence: one can raise the brain-body ratio by lowering the denominator, i.e., by decreasing the size of the body, as shown by the horizontal arrow in the figure below. (In each case, the arrow shifts to a point that is at a greater vertical distance from the best-fit line below it, indicating its raised brain-body ratio.)

[Figure: brain weight versus body weight for primates, with best-fit line]

Rather than making a bigger brain, we can give the animal a smaller body! Either way, brain-body ratio rises, as potentially does the intelligence that the brain-body combo can support.
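The arithmetic of the two routes is simple enough to sketch in a few lines of Python. The scaling constant and exponent below are illustrative stand-ins, not fitted values; the point is only that halving the body raises a brain-body quotient just as surely as growing the brain does:

```python
# Illustrative encephalization-style quotient: actual brain mass divided by
# the mass "expected" for a body of that size under an allometric law
# expected = c * body**k. The values of c and k are placeholders for
# exposition, not fitted to real data.

def quotient(brain_g, body_g, c=0.06, k=0.75):
    """Brain mass relative to the allometric expectation for this body size."""
    expected = c * body_g ** k
    return brain_g / expected

base = quotient(brain_g=400.0, body_g=60000.0)            # reference animal
bigger_brain = quotient(brain_g=800.0, body_g=60000.0)    # raise the numerator
smaller_body = quotient(brain_g=400.0, body_g=30000.0)    # lower the denominator

assert bigger_brain > base and smaller_body > base
print(round(bigger_brain / base, 2))  # doubling brain mass doubles the quotient: 2.0
print(round(smaller_body / base, 2))  # halving body mass raises it by 2**0.75: 1.68
```

Either route moves the animal farther above the allometric expectation; they differ only in which side of the ratio you push.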

We’re not in a position today to understand the specific mechanisms by which brains differ as body size varies, so we cannot simply shrink the body and get a smarter beast. But, then again, we also don’t understand the specific mechanisms by which larger brains support greater intelligence! Building smarter via building larger brains is just as much a mystery as the prescription I am suggesting: to build smarter via building smaller bodies. And mine has the advantage that it avoids the engineering scaling nightmare for large brains.

For AI researchers to actually take this advice, though, they have to answer the following question: what counts as a body for these AI brains in the first place? Only after one becomes clear on what their bodies actually are (i.e., what size of body the brains are being designed to support) can one begin to ask how to get by with less of it, and hope to eke out greater intelligence with less brain.

Perhaps this is the ace in the AI hole: perhaps AI researchers have greater freedom to shrink body size in ways nature could not, and thereby grow greater intelligence. Perhaps the AI devices that someday take over and enslave us will have mouse brains with fly bodies. I sure hope I’m there to see that.

This first appeared on March 9, 2010, as a feature at ScientificBlogging.com.

=============

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

Read Full Post »

I was recently interviewed about brain and city evolution by “Gladelic: A Quarterly Magazine of Intuitive Intelligence”. Here’s the beginning…

Mark, why have you chosen to focus your study on the neocortex (and its importance in human evolution), and how did you come to comparing cities and brains?

Despite the brain’s complexity, our gray matter is essentially a surface, albeit convoluted in our case. Bigger brains expand the surface area of the gray matter, with only a meager increase in the thickness of gray matter. Cities, too, are surfaces, because they lie on the Earth. Our cortex has ample white matter wiring that “leaps” out of the gray matter to faraway parts of the brain, and these long-range connections are crucial to keeping the entire cortex closely connected. For cities, highways are the white matter axons, leaving the surface streets to efficiently connect to faraway spots. I began to follow these leads, and to flesh out further analogies: synapses and highway exits; axon wire thickness and number of highway lanes; axon propagation velocity and average across-city transit speed.

I found similar scaling laws governing how network properties increase as surface area increases. For example, in each case, the number of conduits (highways and white matter axons) increases as surface area to approximately the 3/4 power, and the total number of “leaves” (exits and synapses) increases as surface area to approximately the 9/8 power.
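As a back-of-the-envelope illustration of those exponents (the prefactors below are arbitrary; only the 3/4 and 9/8 powers come from the comparison above):

```python
# Scaling laws from the city-brain comparison: conduits (highways / white
# matter axons) grow as surface area to the 3/4 power; "leaves" (exits /
# synapses) as the 9/8 power. Prefactors are arbitrary; only the exponents
# matter for the ratios computed here.

def conduits(area):
    return area ** 0.75      # 3/4 power

def leaves(area):
    return area ** 1.125     # 9/8 power

# Quadrupling the surface area:
print(conduits(4.0) / conduits(1.0))  # about 2.83x as many conduits
print(leaves(4.0) / leaves(1.0))      # about 4.76x as many leaves
```

So conduits grow more slowly than area (sublinear), while leaves grow faster than area (superlinear), in both networks.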

Despite the radically different kind of network, they are in some respects similar enough, and each has been under selection pressure over time to become more efficiently wired. The selection pressure for brains was, of course, natural selection, which involved lots and lots of being eaten. And the selection pressure for cities was teems of political decisions over many decades to steer a city to work better and better as it grew.

The rest of the interview is at Gladelic (half way down). More about my city-brain research can be found here: https://changizi.wordpress.com/category/cities-shaped-like-brains/

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

Read Full Post »

Benchfly’s Alan Marnett hit me with an in-depth interview Dec 16, 2009. In addition to getting into the science, the nice thing about the interview was the opportunity to talk about different ways of being a scientist. As you’ll see, I suggest being an aloof son-of-a-bitch, something I also talk about in this piece titled “How Not to Get Absorbed in Someone Else’s Abdomen“.

—————————————

As research scientists, many of us spend a very large amount of time working on a very small subject.  In fact, it’s not unusual for a biochemist to go through their entire career without ever physically observing the protein or pathway they work on.  As we hyper-focus on our own niche of science, we run the risk of forgetting to take the blinders off to see where our slice of work fits into the rest of the pie.

For Dr. Mark Changizi, assistant professor and author of The Vision Revolution, science starts with the pie.  We spoke with Dr. Changizi about why losing focus on the big picture can hurt our research, how autistic savants show us the real capacity of the brain and what humans will look like a million years from now.

BenchFly: Your book presents theories on questions ranging from why our eyes face forward to why we see in color.  Big questions.  As a kid, was it your attraction to the big questions that drew you into science?

Mark Changizi: I sometimes distinguish between two motivations for going into science. First there’s the “radio kid,” the one who takes apart the radio, is always fascinated with how things work, and is especially interested in “getting in there” and manipulating the world. And then there’s the “Carl Sagan kid,” the one motivated by the romantic what-does-it-all-mean questions. The beauty of Sagan’s Cosmos series is that he packaged science in such a way that it fills the more “religious” parts of one’s brain. You tap into that in a kid’s mind, and you can motivate them in a much more robust way than you can from a here’s-how-things-work motivation. I’m a Carl Sagan kid, and was specifically further spurred on by Sagan’s Cosmos. As long as I can remember, my stated goal in life has been to “answer the questions to the universe.”

While that aim has stayed constant, my views on what counts as “the questions to the universe” have changed. As a kid, cosmology and particle physics were where I thought the biggest questions lay. But later I reasoned that there were even more fundamental questions: even if physics were different from what we have in our universe, math would be the same. In particular, I became fascinated with mathematical logic and the undecidability results, the area of my dissertation. With those results, one can often make interesting claims about the ultimate limits on thinking machines. But math is not the only thing more fundamental than physics – that math is more fundamental than physics is obvious. In a universe without our physics, the emergent principles governing complex organisms and evolving systems may still be the same as those found in our universe. Even economic and political principles, in this light, may be deeper than physics: five-dimensional aliens floating in goo in a universe with quite different physics may still have limited resources, and may end up with the same economic and political principles we fuss over.

So perhaps that goes some way to explaining my research interests.

Tell us a little about both the scientific and thought processes when tackling questions that are very difficult to actually prove beyond a shadow of a doubt.

This is science we’re talking about, of course, not math, so nothing in science is proven in the strong mathematical sense. It is all about data supporting one’s hypothesis, and all about the parsimony of the hypothesis.  Parsimony means explaining the greatest range of data with the least theory. That’s what I aim for.

But it can, indeed, be difficult to find data for the kinds of questions I am interested in, because they often make predictions about a large swathe of data nobody has. That’s why I typically have to generate 50 to 100 ideas in my research notes before I find one that’s not only a good idea, but one for which I can find data to test it. You can’t go around writing papers without new data to test your ideas. If you want to be a theorist, then not only can you not afford the time to become an experimentalist to test your question, but most of your questions may not be testable by any set of experiments you could hope to do in a reasonable period of time. Often it requires pooling together data from across an entire literature.

In basic research we are often hyper-focused on the details.  To understand a complex problem, we start very simple and then assume we will eventually be able to assemble the disparate parts into a single, clear picture.  In essence, you think about problems in the opposite direction – asking the big questions up front.  Describe the philosophical difference between the two approaches, as well as their relationship in the process of discovery.

A lot of people believe that by going straight to the parts – to the mechanism – they can eventually come to understand the organism. The problem is that the mechanisms in biology were selected to do stuff, to carry out certain functions. The mechanisms can only be understood as mechanisms that implement certain functions. That’s what it means to understand a mechanism: one must say how the physical material manages to carry out a certain set of functional capabilities.

And that means one must get into the business of building and testing hypotheses about what the mechanism is for. Why did that mechanism evolve in the first place? There is a certain “reductive” strain within the biological and brain sciences that believes science has no business getting into questions of “why” – that that’s “just-so story” stuff.  Although there are plenty of just-so stories – i.e., bad science – in the study of the design and function of biological structure, it by no means needs to be that way. It can be good science, just like any other area of science. One just needs to make testable hypotheses, and then go test them. And it is not appreciated how often reductive types are themselves in the business of just-so stories; e.g., computational simulators are concerned just with the mechanisms and often eschew worrying about the functional level, but then allow themselves a dozen or more free parameters in their simulation to fit the data.

So, you have got to attack the functional level in order to understand organisms, and you really need to do that before, or at least in parallel with, the study of the mechanisms.

But in order to understand the functional level, one must go beyond the organism itself, to the environment in which the animal evolved. One needs to devise and test hypotheses about what the biological structure was selected for, and must often refer to the world. One can’t just stay inside the meat to understand the meat.

Looking just at the mechanisms is not only not sufficient, but will tend to lead to futility. An organism’s mechanisms were selected to function only when the “inputs” were the natural ones the organism would have encountered. But when you present a mechanism with an utterly unnatural input, the meat doesn’t output, “Sorry, that’s not an ecologically appropriate input.” (In fact, there are results in theoretical computer science saying that it wouldn’t be generally possible to have a mechanism capable of having such a response.) Instead, the mechanism does something. If you’re studying the mechanism without an appreciation for what it’s for, you’ll have teems and teems of mechanistic reactions that are irrelevant to what it is designed for, but you won’t know it.

The example I often use is the stapler. Drop a stapler into a primitive tribe, and imagine what they do with it. Having no idea what it’s for, they manage to push and pull its mechanisms in all sorts of irrelevant ways. They might spend years, say, carefully studying the mechanisms underlying why it falls as it does when dropped from a tree, or how it functions as crude nunchucks. There are literally infinitely many aspects of the stapler mechanism that could be experimented upon, but only a small fraction are relevant to the stapler’s function, which is to fasten paper together.

In explaining why we see in color, you suggest that it allows us to detect the subtleties of complex emotions expressed by humans – such as blushing.  Does this mean colorblind men actually have a legitimate excuse for not understanding women?!

…..to see my answer, and the rest of the interview, go to Benchfly.

Read Full Post »

This first appeared on November 16, 2009, as a feature at ScientificBlogging.com

No one draws pictures of heads with little gears or hydraulics inside any more. The modern conceptualization of the brain is firmly computational. The brain may be wet, squooshy, and easy to serve with an ice cream scooper, but it is nevertheless a computer.

However, there is a rather glaring difficulty with this view, and it is encapsulated in the following question: If our brains are computers, why doesn’t size matter? In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter. Most of the brain size differences across mammals seem to make no behavioral difference at all to the animal.

Instead, the greatest driver of brain size is not how smart the animal is, but how big the animal is. Brain size doesn’t much matter – instead, it is body size that matters. That is not what one would expect of a computer in the head. Brain scientists have long known this. For example, take a look at the plot below showing how brain mass varies with body mass. You can see how tightly correlated they are. If one didn’t know that the brain was the thinking organ and consequently lobbed it into the same pile as the liver, heart and spleen (FYI, I keep my pile of organs in the crawl space), then one would not find it unusual that it increases so much with body size. Organs do that.

But the brain is supposed to be a computer of some strange kind. And yet it is acting just like a lowly organ. It gets bigger merely because the animal’s body is bigger, even though the animal may be no smarter. The plot below, from a 2007 article of mine (in Kaas JH (ed.) Evolution of Nervous Systems. Oxford, Elsevier) shows how behavioral complexity varies with brain mass. There is no correlation. Bigger and bigger brains, and seemingly doing nothing for the animal!

It has long been clear to neuroscientists that what does correlate nicely with animal intelligence is how high above the best-fit line a point is in the brain-versus-body plot we saw earlier. This is called the encephalization quotient, or EQ. It is simply a measure of how big the brain is once one has controlled for body size. EQ matches our intuitive ranking of mammalian intelligence, and in a 2003 paper (in the Journal of Theoretical Biology) I showed that it also matches quantitative measures of their intelligence (namely, the number of items in ethograms as measured by ethologists). The plot is shown below, where you can see that the number of behaviors in each of the mammalian orders rises strongly with EQ.

But although this is well known by neurobiologists, there is still no accepted answer to why brains get bigger with body size. Why should a cow have a brain 200 times larger than a roughly equally smart rat, or 10 times larger than a clearly-smarter house cat? One of my older research areas, in fact, aimed to explain why brains change in the ways they do as they grow in size from mouse to whale (http://www.changizi.com/changizi_lab.html#neocortex), and yet, embarrassingly, I have no idea why these brains are increasing with body size at all. If a dull-witted cow could just stick a tiny rat brain into its head and get all the behavioral complexity it needs, then brains would come in just one size, and I would have had no research to work on concerning the manner in which brains scale up in size.

So, here’s a plan. I would like to hear your hypotheses for why brains increase so quickly with body mass (namely as the 3/4 power). I will let you know if the idea is new, and I will see if I can give your idea a good thrashing. What’s at stake here is our very framework for conceptualizing what the brain is. Perhaps you can say why it is a computer, and that greater body size brings in certain subtle computational demands that explain why brain volume should increase as it does with body mass. Or, more exciting, perhaps you can propose an altogether novel framework for thinking about the brain, one that makes the enigmatic “size matters” issue totally obvious.
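For the curious, here is roughly how such an exponent is estimated in practice: fit a straight line in log-log space, where the slope is the power. The data below are synthetic, generated with a known slope of 0.75 purely to illustrate the procedure, not real brain or body masses.

```python
import numpy as np

# Estimating an allometric exponent ("brain mass scales as body mass to
# the 3/4 power") by linear regression in log-log coordinates. The data
# are synthetic: generated from a 0.75-power law with multiplicative
# noise, so the fitted slope should come back close to 0.75.

rng = np.random.default_rng(0)
body = np.logspace(1, 6, 40)                               # 10 g to 1000 kg
brain = 0.05 * body ** 0.75 * rng.lognormal(0.0, 0.1, size=body.size)

slope, intercept = np.polyfit(np.log10(body), np.log10(brain), 1)
print(round(slope, 2))  # close to 0.75, the recovered scaling exponent
```

Whatever hypothesis you propose should, in the end, predict why a fit like this keeps returning something near 3/4.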

To the comments!…

This is where the fun of the piece begins, because at ScientificBlogging.com there were more than 70 comments, all quite productive (no trolls). So, go here and scroll down to the comments.  …and leave one!

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Comments

It seems to me that total brain mass vs. body size doesn’t account for different parts of the brain.
Intelligence seems to me to be more related to the percentage of brain mass dedicated to the frontal cortex vs. the total brain mass. Larger animals may need more brain mass to process more nerve receptors in the larger amount of skin, for example, or dedicated to processing Smell. But the part of the brain dedicated to higher-level functions may be smaller by some measure (either total mass, or percentage of the rest of the brain mass, etc.)

Hi Chuck

“to process more nerve receptors in the larger amount of skin”
Nice. That’s one common hypothesis. And not only more skin and thus more sensory receptors, but more musculature, and so on. But *that* would seem to imply that bigger mammals should have disproportionately larger somatosensory and motor areas, but they don’t.

“dedicated to processing Smell”
But why should larger animals need bigger olfactory neural tissue?

The motor processing functions of an animal’s brain may be proportionate, but the brain as a whole has to take the total motor processing input and output into account; when you are large and have a complex environment to deal with, you need a correspondingly larger brain to deal with it.

Read Full Post »

I was on the Lionel Show / Air America this morning, which was a blast!  Got to talk about my recent book, and about evolution, autistic savants, intelligent design, color, forward-facing eyes, illusions, and more. I really must get off the elliptical machine next time I do a radio show. Here’s the segment with me (or mp3 on your computer).

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

Read Full Post »


Kirsten Sanford (shown here) and co-host Justin Jackson (sorry Justin, you understand) of This Week in Science interviewed me last week about my research and my recent book, The Vision Revolution.

Here’s the interview, and I don’t start jibber-jabbering until about 33 minutes in.  Notice how they sucker-punch me right in the belly button. (That’s what they mean by “the kickass science podcast”.)

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

Read Full Post »

Originally a piece in ScientificBlogging, October 2, 2009…

Dear Hugh Hefner:

Ever wondered  why you’re rich?   Yes, yes, you’re a savvy businessman who succeeded where thousands have failed.   But there are deeper reasons underlying why your business model works at all. When one digs deeply enough one finds that color – yup, the stuff of rainbows and Crayola – is at the core of your success. Without hue, there’d be no Hugh.

To see why you should be giving thanks to the existence of color, let’s start with something closer to your home: nakedness.  Although mammals tend to be furry-faced, some of us primates had the chutzpah to lose the hair on our faces, and often on our rumps. And we humans are nearly naked all over, something you may have noticed.  If we humans weren’t so bare, we would probably not wear robes. And then there would be no reason to disrobe.

If there were no bare skin, there would be no Hefner as we know him.

Now  let’s delve deeper and ask why some of us primates got bare in the first place. One feature that distinguishes the primates with bare faces from the furry-faced ones is color vision. The naked primates can see in color, but the furry-faced ones cannot.   Color goes with nudity. Why?

As I have argued in my research, our color vision is a distinctive kind of color vision, one that is specialized for detecting the color changes that happen in skin due to the physiological changes in blood (e.g., oxygenation). Most varieties of color vision – like that in birds, reptiles and bees – do not have this extraordinary capability. Our color vision is for seeing blushes, blanches, red rage, sexual engorgement and the many other skin color changes that occur as one’s emotion, mood, or physiology alters. Color is for seeing embarrassment, fear, anger, sexual excitement, and so on.

Our primate ancestors once had furry faces, and one was born with our style of color vision, able to detect the peculiar changes in our underlying blood physiology. Although the faces this ancestor looked at were  furry, some skin would have been visible, such as around the eyes, nostrils, lips and any lighter patches of fur. This ancestor would have been born an “empath,” able to see the moods of others. Color vision of this kind would thus spread over time.

And once it spread, animals could then have evolved to “purposely” signal colors indicating their mood, and then bare skin would have evolved to have more canvas for signaling. Many of our skin color changes are indeed “purposeful,” i.e., not simply inevitable consequences of our underlying physiological state. For example, Peter D. Drummond has shown that people’s faces blush more on the side that other people can see.

You might be wondering why, unlike the other primates, who mainly have bare faces and rumps, we humans are so naked all over.  It might be that, although we don’t consciously notice it, we color signal over our entire canvas.  If all our bare spots are for color signaling (setting aside the palms and the bottoms of the feet), then we should not be naked in places that viewers would not tend to be able to see.

Well, there are three places on the body that are difficult to observe: the top of the head, the underarms and the groin. And notice that, as expected if bare skin is for color signaling, these three spots are the universally furry spots on humans.

The only complication here is that the groin does occasionally become dominated by bare skin rather than fur, namely when the genitalia engorge. But at these times there is often another person involved in a behavior wherein the groin is, ahem, no longer difficult to see.

Bare skin really may be for looking at! And it is worth looking at because it often signals something to the viewer. But the viewer can only see these signals if they have our special kind of color vision.

No color vision, no nakedness. No nakedness, no Hugh Hefner.

Or, no hue, no Hugh.

And now the real point of my writing: Because of the dependency of your enterprise on the evolution of color, it would only be natural to bring some diversity to those apocryphal parties at the mansion … by inviting an evolutionary neuroscientist.

Just have your people call my person.

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

Read Full Post »

Originally a piece in ScientificBlogging, September 28, 2009…

You open your dictionary to figure out what your friend meant by 'nasute,' only to find that the definition is "A wittol, or jemadar; bannocked in an emunctory fashion." What good is this dictionary, you wonder, if it only refers you to other words you don't know? And worse still, the definitions of some of these words refer back to 'nasute,' the word you didn't know in the first place! Even if your attempt to learn what 'nasute' means is not infected by circularity, you face a quick explosion of words to look up: the words in the definition, the words in each of these definitions, and so on. The dictionary appears, then, to be a terribly messy tangled web.

In reality, however, dictionaries aren’t quite that worthless. …and the definition of ‘nasute’ above is, thankfully, fiction. The standard line for why dictionaries are useful is that the typical users of dictionaries already know the meanings of much of the dictionary, and so a disorderly dictionary definition doesn’t send them on an exploding wild goose chase.

Dictionaries would, however, be only a source of frustration for a person not knowing any of the vocabulary. And, therefore, dictionaries – and the lexicon of language they attempt to record – can't be organized the way our mental lexicon is. If a word is in our mental lexicon, then we know what it means. And if we know what it means, then our brain is able to unpack its meaning in terms it understands. The brain is not sent on a wild goose chase like the one fated for a monolingual Zulu speaker handed the Oxford English Dictionary.

Compared to the disheveled dictionary, the mental lexicon is much more carefully designed. The mental lexicon is hierarchical, having at its foundation a small number – perhaps around 50 – of “semantic primes” (or fundamental atoms of meaning) that are combined to mentally define all our mental concepts, something the linguist Anna Wierzbicka has argued. And our internal lexicon has a number of hierarchical levels, analogous to the multiple levels in the visual hierarchy or auditory hierarchy.

The “visual meaning” of a complex concept – e.g., the look of a cow – gets built out of a large combination of fundamental visual atoms, e.g., oriented contours and colored patches. In the same way, the (semantic) meaning of the concept of a cow gets built out of a large combination of fundamental semantic atoms, e.g., words like ‘you’, ‘body’, ‘some’, ‘good’, ‘want’, ‘now’, ‘here’, ‘above’, ‘maybe’, and ‘more’. In both sensory and semantic hierarchies, the small number of bottom level primes are combined to build a larger set of more complex things, and these, in turn, are used to build more complex ones, and so on. For vision, sample objects at increasingly more complex levels include contours, junctions, junction-combinations, and objects. For the lexicon, examples of increasing complexity are ‘object’, ‘living thing’, ‘animal’, ‘vertebrate’, ‘mammal’, ‘artiodactyl’, and ‘cow’.

In our natural state, the mental lexicon we end up with depends upon our experiences. No rabbits in your locale, no lexical entry in the head for rabbits. And the same is true for vision. The neural hierarchy is somewhat flexible, but it is built to do its lexical work hierarchically and efficiently, no matter the specific lexicon that fills it.

There is quite a difference, then, between the disordered, knotted dictionary and our orderly, heavily optimized, hierarchical mental lexicon. Language's vocabulary, shaped by cultural selection – and whose structure dictionaries partially record – does not seem to have harnessed the lexical expectations of our brain.

However, are dictionaries really so tangled? Back in 2005 while working at Caltech, I began to wonder. Dictionaries surely do have some messiness, because they’re built by real people from real messy data about the use of words: so some circularities may occasionally get thrown in by accident. But my bet was that the signature, efficient hierarchical structure of our inner lexicon should be in the dictionary, if only we looked carefully for it. Language would work best if the public vocabulary were organized in such a way that it would naturally fit the shape of our lexical brain, and I suspected cultural selection over time should have figured this out. …that it should have given us a dictionary shaped like the brain: a braintionary.

So I set out on a search for these signature hierarchical structures in the dictionary. A search to find the hidden brain in the dictionary. In particular, I asked whether the dictionary is hierarchically organized in such a way that it minimizes the total size needed to define everything it must.

To grasp the problem, a starting point is to realize that there is more than one way to build a hierarchical dictionary. One could use the most fundamental words to define all the other words in the dictionary, so that there would be just two hierarchical levels: the small set of fundamental (or atomic) words, and the set of everything else. Alternatively, dictionaries could use the most fundamental words to define an intermediate level of words, and in turn use these words to define the rest. That would make three levels, and, clearly, greater numbers of levels are possible.

My main theoretical observation was that having just the right number of hierarchical levels can greatly reduce the overall size of the dictionary. A dictionary with just two hierarchical levels, for example, would have to be more than three times larger than an optimal one that uses around seven levels.
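The size savings can be sketched with a toy Python example (my own construction for illustration, not the actual analysis from the paper): give ten complex words meanings built from two shared bundles of semantic primes, and compare the total definition length of a flat, two-level dictionary against a three-level one in which intermediate words name the bundles.

```python
# Toy model of dictionary size: total definition length with and without
# an intermediate hierarchical level. (Illustrative only.)

atoms = list("abcdefgh")          # stand-ins for semantic primes
bundle1, bundle2 = atoms[:4], atoms[4:]

# Flat (two-level) dictionary: every complex word defined directly in primes.
flat_defs = {f"w{i}": bundle1 + bundle2 for i in range(10)}
flat_size = sum(len(d) for d in flat_defs.values())   # 10 words * 8 primes = 80

# Three-level dictionary: intermediate words m1, m2 name the shared bundles,
# and the complex words are defined in terms of them.
hier_defs = {"m1": bundle1, "m2": bundle2}
hier_defs.update({f"w{i}": ["m1", "m2"] for i in range(10)})
hier_size = sum(len(d) for d in hier_defs.values())   # 4 + 4 + 10*2 = 28

print(flat_size, hier_size)       # 80 28
```

Each intermediate word is defined once but reused many times, which is why the total shrinks; with larger vocabularies and more levels of reuse the savings compound, and that is the intuition behind there being an optimal number of levels rather than just two.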

Via measurements from and analysis of WordNet and the Oxford English Dictionary, in a paper I published in Cognitive Systems Research I provided evidence that actual dictionaries have approximately the optimal number of hierarchical levels. I discovered that dictionaries do have the structure expected for the brain's lexical hierarchy. Dictionaries are braintionaries, designed by culture to have the structure our brains like, maximizing our vocabulary capabilities for our fixed ape brains.

What it means is that language has culturally evolved over the centuries and millennia not only to have the words we need, but also to have an overall organization—in terms of how words get their meanings from other words—that helps minimize the overall size of the dictionary. …and simultaneously helps us efficiently encode the lexicon in our heads.

The journal article itself is linked here.

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

Read Full Post »

Other people have an accent, but not me. And this is not just because I have no accent. I wouldn’t have an accent even if I had one!

Accent is a strange thing (as is my reasoning style). No matter the accent you get stuck with – southern, New Yorker, or my valley girl rendition – you feel as if it is the other accents that sound accented to you. Your own accent sounds, well, unaccented, like vanilla, corn flakes, or white bread. Arguments about which person “has an accent” don’t tend to be productive; just a lot of pointing and reiterating the pearl, “No, you’re the one with the accent.”

And it is not just accent where we find ourselves behaving badly. We do the same for skin color. Most people feel that their own skin color is fairly uncolorful, and difficult to accurately name. Why are our perceptual systems like this? Here’s what I said about this in The Vision Revolution.

    “Why would we evolve to perceive our own skin color as uncategorizable and uncolored? How could this be a useful thing? Consider an object with a color that is highly categorizable—say an orange. If I place 100 oranges in front of you, there will actually be some variation in their colors, but you won’t pay much attention to these differences. You will subconsciously lump together all the different hues into the same category: “orange.” Ignoring differences is a central feature of categorization. To categorize is to stereotype. When a color is uncategorizable, however, the opposite of stereotyping occurs. Rather than lumping together all the different colors, you appreciate all the little differences. Because our skin color cannot be categorized, we are better able to see very minor deviations in skin color, and therefore register minor changes in others’ skin color as they occur.”

Unfortunately, this fine discrimination around one’s own skin color (or accent, or the taste of your own saliva, for that matter) has an unintended consequence: it can lead to racism.

Race and skin color.

Could racism really be a side effect of highly efficient perceptual mechanisms? I’m afraid so. Here’s an excerpt from The Vision Revolution where I discuss why…

    If our skin color is so uncolored, why do we use color terms so often to refer to race? Races may not literally be white, black, brown, red or yellow, but people do perceive other races to be colored in the general direction of these fundamental colors, which is why color terms are used at all. So, what is all this nonsense about uncolored skin?
    To answer this, one must remember that it is only one’s own skin that appears uncolored. I perceive my saliva as tasteless, but I might taste a sample of some of yours. I don’t smell my nose, but I might be able to smell yours. Similarly, my own skin may appear uncolored to me, but a consequence of being designed to perceive the changes around baseline is that even fairly small deviations from baseline are perceived as qualitatively colored, just as a 100 degree temperature is perceived as hot. An alien coming to visit us would find it utterly perplexing that a white person perceives a black person’s skin to be so different from his own, and vice versa. Their spectra are practically identical (see Figure 3). But then again, this alien would be surprised to learn that you perceive 100 degree skin as hot, even though 98.6 degrees and 100 degrees are practically the same.
    Therefore, the fact that languages tend to use color terms to refer to other races is not at all mysterious. It is consistent with what would be expected if our color vision is designed for seeing color changes around baseline skin color. Whereas your baseline skin color is uncategorizable and appears uncolored, skin colors deviating even a little from baseline appear categorizably colorey.
    Skin color is probably a lot like accents. Rather than asking about the color of your skin, let’s now ask, What is the accent of your own voice? The answer is that you perceive it to have no accent. But you perceive people coming from other regions or countries to have an accent. Of course, they believe that you are the one with the accent, not them. This is because we are designed to ably discriminate the voices of people in our lives who have the same accent (or non-accent) as ourselves. We need to discriminate between different people’s voices, and we also need to discriminate the inflections in the voice of a single individual. A consequence of this is that our own voice and those typical of our community are perceived as non-accented, and even fairly small deviations away from this baseline accent are perceived as categorizably accented (e.g., country, urban, Boston, New York, English, Irish, German and Latino accents). Because of this, people find it difficult to recognize people by voice when they have an accent. People also find it more difficult to discriminate the tone or emotional inflections of the speaker when the speaker has an accent.
    In talking about your perception of your own skin color earlier, for simplicity I was implicitly assuming that the community you have grown up around shares approximately the same skin color. For most of our evolutionary history this was certainly the case. And even today most people are raised and live among individuals largely sharing their own skin color, but by no means always. If you are an ethnic minority in your community, your skin color may differ from the average skin color around you, and the baseline skin color your vision is calibrated to may well end up different from your own. If this were the case, then you may in principle perceive your own skin to be colored. For example, if you are of African descent but living in the U.S., then because the baseline skin color of the U.S. leans toward that of Caucasians, you may perceive your own skin to be color-ey. Similarly, if someone with a Southern accent moves to New York City, he may begin to notice his own accent because the baseline accent of his community has changed (but his accent may not much change).

    One implication of all this is that our perception of the skin color of various races is illusory, and these illusions are potentially one factor underlying racism. In fact, it leads to at least three distinct (but related) illusions of racial skin color. To understand these three illusions, it is helpful to consider these illusions in the context of perceived temperature.

    First, as noted earlier, we perceive 98.6 degrees to be neither warm nor cold, yet we perceive 100 degrees as hot. That is, we perceive one temperature to have no perceptual quality of warmth/cold, whereas we perceive the other temperature to categorically possess a temperature (namely hot). This is an illusion because there is nothing in the physics of temperature that underlies this perceived qualitative difference between these two temperatures. For skin there is an analogous illusion, namely the perception we have that one’s own skin is uncolorful but that the skin of other races is colored. This is an illusion because there is no objective sense in which your skin is uncolorful but that of others is colorful. (Similarly, there is no objective truth underlying the perception that one’s own voice is not accented but that foreign voices are.)

    A second consequent illusion is illustrated by the fact that we perceive 98.6 degrees as very different from 100 degrees, even though they are objectively not very different. This is closely related to the first illusion, but differs because whereas the first concerns the absence versus the presence of a perceived categorical quality, this illusion concerns the perceived difference in the two cases. The analogous illusion for skin is the perception that your own skin is very different from that of some other races. This is an illusion because the spectra underlying skin colors of different races are actually very similar.

    And third, we perceive 102 degrees and 104 degrees as very similar in temperature, despite their objective difference being greater than the difference between 98.6 degrees and 100 degrees, which we perceive as very different. For skin colors, we lump together the skin colors of some other races as similar to one another, even though in some cases their colors may differ as much as your own color does from either of them. For example, while people of African descent distinguish between many varieties of African skin, Caucasians tend to lump them all together as "black" skin. (And for the perception of voice, many Americans confuse Australian accents with English ones, two accents that are probably just as objectively different from each other as American is from English.)

    As a whole, these illusions lead to the false impression that other races are qualitatively very different from ourselves, and that other races are homogeneous compared to our own. It is, then, no wonder that we humans have a tendency to stereotype other races: we suffer from perceptual illusions that encourage this. But by recognizing that we suffer from these illusions, we can more ably counter them.

How much of the human tendency toward racism is explained by these perceptual mechanisms? I don’t know, but I would not underestimate the power of such illusions, for they fundamentally affect – or color – how we see the world and the people in it.

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute. This research on the evolution of color – along with his work on the origins of writing, illusions, and stereo vision – is covered in his new book, The Vision Revolution (Benbella Books).

Read Full Post »
