Posts Tagged ‘Harnessed’


My most recent book, Harnessed, has now appeared in Korean translation, thanks to tireless translator Seung Young Noh. For more info about the book, here’s a start: a review by a Nobel laureate.


Mark Changizi is Director of Human Cognition at 2AI, a managing director of O2Amp, and the author of HARNESSED: How Language and Music Mimicked Nature and Transformed Ape to Man and THE VISION REVOLUTION. He is finishing up his new book, HUMAN 3.0, a novel about our human future, and working on his next non-fiction book, FORCE OF EMOTION.

Read Full Post »

New Scientist's Top Ten Science Books in 2011, Harnessed is on the right

I’m excited that my new book, Harnessed, is among New Scientist’s top ten science books of 2011, standing alongside books by authors I admire.

In the book I describe (and present a large battery of new evidence for) my radical new theory for how humans came to have language and music. They’re not instincts (i.e., we didn’t evolve them via natural selection), and they’re not something we merely learn. Instead, speech and music have themselves culturally evolved to fit us (not a new idea) by mimicking fundamental aspects of nature (my idea). Namely, speech came to sound like physical events among solid objects, and music came to sound like humans moving and behaving in your midst (that’s why music is evocative). Each of these artifacts thereby came to harness an instinct we apes already possessed, namely auditory object-event recognition and auditory human-movement recognition mechanisms.

The story for how we came to have speech and music is, then, analogous to how we came to have writing, something we know we didn’t evolve. Writing, I’ve argued (in The Vision Revolution), culturally evolved to possess the signature shapes found in nature (and specifically in 3D scenes with opaque objects), and thereby harnessed our visual object-recognition system.

Buy the book here.


Mark Changizi is Director of Human Cognition at 2AI, and the author of
Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man and The Vision Revolution. He is finishing up his new book, HUMAN, a novel about our human future.

Read Full Post »

I’ve argued there’s no imminent singularity, and I’ve thrown water on the idea that the web will become smart or self-aware. But am I just a wet blanket, or do I have a positive vision of our human future?

I have just written up a short “manifesto” of sorts about where we humans are headed, and it appeared in Seed Magazine. It serves not only as a guidepost to our long-term future, but also as one for how to create better technologies for our brains (part of the aim of the research institute, 2AI, that I co-direct with colleague Tim Barber).


Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books, 2009) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011).

Read Full Post »

I believe that music sounds like people, moving. Yes, the idea may sound a bit crazy, but it’s an old idea, much discussed in the 20th century, and going all the way back to the Greeks. There are lots of things going for the theory, including that it helps us explain…

(1) why our brains are so good at absorbing music (…because we evolved to possess human-movement-detecting auditory mechanisms),

(2) why music emotionally moves us (…because human movement is often expressive of the mover’s mood or state), and

(3) why music gets us moving (…because we’re a social species prone to social contagion).

And as I describe in detail in my upcoming book — Harnessed: How Language and Music Mimicked Nature and Transformed Ape To Man — music has the signature auditory patterns of human movement (something I hint at in this older piece of mine).

Here I’d like to describe a novel way of thinking about what the meaning of music might be. Rather than dwelling on the sound of music, I’d like to focus on the look of music. In particular, what does our brain think music looks like?

It is natural to assume that the visual information streaming into our eyes determines the visual perceptions we end up with, and that the auditory information entering our ears determines the events we hear.

But the brain is more complicated than this. Visual and auditory information interact in the brain, and the brain utilizes both to guess the single scene to render a perception of. For example, the research of Ladan Shams, Yukiyasu Kamitani, and Shinsuke Shimojo at Caltech has shown that we perceive a single flash as a double flash if it is paired with a double beep. And Robert Sekuler and others from Brandeis University have shown that if a sound occurs at the time when two balls pass through each other on screen, the balls are instead perceived to have collided and reversed direction.

These and other results of this kind demonstrate the interconnectedness of visual and auditory information in our brain. Visual ambiguity can be reduced with auditory information, and vice versa. And, generally, both are brought to bear in the brain’s attempt to infer the best guess about what’s out there.

Your brain does not, then, consist of independent visual and auditory systems, with separate troves of visual and auditory “knowledge” about the world. Instead, vision and audition talk to one another, and there are regions of cortex responsible for making vision and audition fit one another.

These regions know about the sounds of looks and the looks of sounds.

Because of this, when your brain hears something but cannot see it, your brain does not just sit by and refrain from guessing what it might have looked like.

When your auditory system makes sense of something, it will have a tendency to activate visual areas, eliciting imagery of its best guess as to the appearance of the stuff making the sound.

For example, the sound of your neighbor’s rustling tree may bring to mind an image of its swaying lanky branches. The whine of your cat heard far away may evoke an image of it stuck up high in that tree. And the pumping of your neighbor’s kid’s BB gun can bring forth an image of the gun being pointed at Foofy way up there.

Your visual system has, then, strong opinions about the proper look of the things it hears.

And, bringing ourselves back to music, we can use the visual system’s strong opinions as a means for gauging music’s meaning.

In particular, we can ask your visual system what it thinks the appropriate visual is for music.

If, for example, the visual system responds to music with images of beating hearts, then it would suggest, to my disbelief, that music mimics the sounds of heartbeats. If, instead, the visual system responds with images of pornography, then it would suggest that music sounds like sex. You get the idea.

But in order to get the visual system to act like an oracle, we need to get it to speak. How are we to know what the visual system thinks music looks like?

One approach is to simply ask which visuals are, in fact, associated with music. For example, when people create imagery of musical notes, what does it look like? One cheap way to look into this is simply to do a Google (or any search engine) image search on the term “musical notes.” You might think such a search would merely return images of simple notes on the page.

However, that is not what one finds. To my surprise, actually, most of the images are like the one in the nearby figure, with notes drawn in such a way that they appear to be moving through space.

Notes in musical notation never actually look anything like this, and real musical notes have no look at all (because they are sounds). And yet we humans seem to be prone to visually depicting notes as moving all about.


Music tends to be depicted as moving.

Could these images of notes in motion be due to a more mundane association?

Music is played by people, and people have to move in order to play their instrument. Could this be the source of the movement-music association? I don’t think so, because the movement suggested in these images of notes doesn’t look like an instrument being played. In fact, it is common to show images of an instrument with the notes beginning their movement through space from the instrument: these notes are on their way somewhere, not an indication of the musician’s key-pressing or back-and-forth movements.

Could it be that the musical notes are depicted as moving through space because sound waves move through space? The difficulty with this hypothesis is that all sound moves through space. All sound would, if this were the case, be visually rendered as moving through space, but that’s not the case. For example, speech is not usually visually rendered as moving through space. Another difficulty is that the musical notes are usually meandering in these images, but sound waves are not meandering — sound waves go straight. A third problem with sound waves underlying the visual metaphor is that we never see sound waves in the first place.

Another possible counter-hypothesis is that musical notes are depicted as moving because all auditory stimuli are caused by underlying events with movement of some kind. The first difficulty, as was the case for sound waves, is that it is not the case that all sound is visually rendered in motion. The second difficulty is that, while it is true that sounds typically require movement of some kind, it need not be movement of the entire object through space. Moving parts within the object may make the noise, without the object going anywhere. In fact, the three examples I gave earlier — leaves rustling, Foofy whining, and the BB gun pumping — are noises without any bulk movement of the object (the tree, Foofy, and the BB gun, respectively). The musical notes in imagery, on the other hand, really do seem to be moving, in bulk, across space.

Music is like tree-rustling, Foofy, BB guns and human speech in that it is not made via bulk movement through space. And yet music appears to be unique in this tendency to be visually depicted as moving through space.

In addition, not only are musical notes rendered as in motion, they tend to be depicted as meandering.

When visually rendered, music looks alive and in motion (often along the ground), just what one might expect if music’s secret is that it sounds like people moving.

A Google Image search on “musical notes” is one means by which one may attempt to discern what the visual system thinks music looks like, but another is to simply ask ourselves what is the most common visual display shown during music. That is, if people were to put videos to music, what would the videos tend to look like?

Lucky for us, people do put videos to music! They’re called music videos, of course. And what do they look like?

The answer is so obvious that it hardly seems worth noting: music videos tend to show people moving about, usually in a time-locked fashion to the music, very often dancing.

As obvious as it is that music videos typically show people moving, we must remember to ask ourselves why music isn’t typically visually associated with something very different. Why aren’t music videos mostly of rivers, avalanches, car races, wind-blown grass, lion hunts, fire, or bouncing balls?

It is because, I am suggesting, our brain thinks that humans moving about is what music should look like…because it thinks that humans moving about is what music sounds like.

Musical notes are rendered as meandering through space. Music videos are built largely from people moving, and in a time-locked manner to the music. That’s beginning to suggest that the visual system is under the impression that music sounds like human movement.

But if that’s really what the visual system thinks, then it should have more opinions than simply that music sounds like movement. It should have opinions about what, more exactly, the movement should look like.

Do our visual systems have opinions this precise? Are we picky about the mover that’s put to music?

You bet we are! That’s choreography. It’s not enough to play a video of the Nutcracker ballet during Beatles music, nor will it suffice to play a video of the Nutcracker to the music of Nutcracker, but with a small time lag between them. The video of human movement has to have all the right moves at the right time to be the right fit for the music.

These strong opinions about what music looks like make perfect sense if music mimics human movement sounds. In real life, when people carry out complex behaviors, their visual movements are tightly choreographed with the sounds – because the sight and sound are due to the same event. When you hear movement, you expect to see that same movement. Music sounds to your brain like human movement, which is why when your brain hears music, it expects that any visual of it should be consistent with it.


This was adapted from Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011). It first appeared July 26, 2010, at Psychology Today.

Mark Changizi is Professor of Human Cognition at 2ai, and author of The Vision Revolution.

Read Full Post »

It is my pleasure to announce that my upcoming book, HARNESSED (Benbella, 2011) can now be pre-ordered at Amazon!

It is about how we came to have language and music. …about how we became modern humans. See https://changizi.wordpress.com/book-harnessed/ for more about the book.


Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

Read Full Post »

Daniel Lende from PLoS Blogs’ Neuroanthropology recently interviewed me about the relationship between culture, brain and nature, and the origins of language. See the interview here.

In my view, anthropology — and evolution and culture — are crucial to understanding neuroscience and our origins. …and so their “Neuroanthropology” blog (also by Greg Downey) will be one I follow closely.

Read Full Post »

As seen in classified ads…

Have a talent and enjoyment for inflicting prescribed doses of pain? Your dream job awaits. (Biology undergraduate required.) Contact: 555-8428

You are not supposed to be reading this. You’re an ape who never evolved to read, but you can do so because writing culturally evolved to be shaped just right for your illiterate visual system. As I have argued in my research and recent books, culture’s trick for getting writing into us was to harness our ancient visual system for a new purpose (The Vision Revolution), a trick also used for speech and music (upcoming in Harnessed). (Hint: The trick to harnessing is, in each case, to mimic nature.)

This “harnessing” strategy is just the tip of the iceberg – our modern civilization is, in myriad ways, shaped to fit our fundamentally uncivilized selves. Culture has given us clothes that fit our body shapes, color patterns that fit our innate color senses, lexicons that fit our brains, religions that fit our aspirations, and chairs that fit our butts.

But there is one glaring gap in how we have been harnessed for modernity, a gap that, if addressed, would lead to a revolution in safety and well-being for humankind.

What’s missing is pain.

Pain is crucial, of course, because it keeps us safe, and prevents us from engaging in acts that injure or slice off parts of ourselves. Although wishing for a world without pain sounds initially alluring, one quickly realizes that such a world would be hell – it would be a world of the walking bruised and hideously injured (unless you’re into that). Those who lack pain don’t last long. And even if they avoid catching on fire or bleeding to death, they often succumb to death by a thousand pricks (e.g., they don’t shift their body weight as the rest of us do when they sit too long in one position, and this leads over time to circulatory damage).

Pain is designed to be elicited before injury actually occurs, with the hope that it prevents injury altogether. (E.g., see Why Does Light Make Headaches Worse?) Pain is evolutionarily designed to cause us to say, “Ouch!”, rather than, “Darn, I needed that appendage!”

More importantly for our purposes here, pain is rigged to be elicited in scenarios that would have been dangerous for our ancestors out in nature. A great example of what happens to animals that encounter injurious situations with no pain mechanisms to deter them is when natural gas accumulates in low spots. One animal gets there and dies. Another animal sees an easy meal, and also dies. Soon there are many dozens of dead animals there, lured to their death, with life-snuffing injuries sneaking up on them without the benefit of warning pain.

And there’s your problem! We no longer live in the nature that shaped our bodies and brains, and the dangerous scenarios we now face aren’t the same as those our ancestors faced. Electricity, band saws, nail guns, stove tops, toasters perched next to bathtubs, and countless other modern dangers exist today, dangers that we’re not designed to have safety-ensuring pain to protect us from (until it’s too late).

What we need are technologies that inflict “smart pain,” pain not only designed to go off at signs of modern dangers, but designed to be painful in the right way, on the right body part, so as to optimally alert us to the acute danger.

Just to throw out a few examples…

  • Your car rigged to shock you on your left or right side if you drive within several inches of an obstacle on your car’s left or right, respectively.
  • Your computer set to shine a painfully bright red light if you are about to click on a suspicious link.
  • A wearable device with a video sensor that detects the likelihood that the person you’re picking up at a bar has an STD, and then causes severe itching until you flee the bar.

You’re beginning to get the idea, and I hope you can see that the possibilities are endless. What I would like to see are your own suggestions for the future of pain engineering, and a world where all sadists are gainfully employed.

This first appeared on May 6, 2010, as a feature at bodyinmind.au


Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed (Benbella Books).

Read Full Post »

There’s a good chance that you’re listening to music while reading this, and if you happen not to be, my bet is that you listen to music in the car, or at home, or while jogging. In all likelihood, you love music – simply love it.

Why?  What is it about those auditory patterns counting as “music” that makes us relish it so?

I have my own opinion about the answer, the topic of my recently finished book that will appear next year, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man. I’ll give you a hint as to my view at the end of this piece, but what I’d like to do in this piece is to put forth four hurdles I believe any theory of music must leap over.

Brain: Why do we have a brain for music?
Emotion: Why is music emotionally evocative?
Dance: Why do we dance?
Structure: Why is music structurally organized as it is?

If a theory can answer all four questions, then I believe we should start paying attention.

To help clarify what I mean by these questions, let’s run through them in the context of a particular lay theory of music, namely the “heartbeat” theory of music. Although there is probably not just a single heartbeat theory put forth by lay people, the main motivation appears to be that a heart carries a beat, something fundamental to music. Of course, we don’t typically hear our own heartbeat, much less anyone else’s, so when the theory is fleshed out I have heard it suggested that the association comes from our in-utero days. One of the constants of the good fetus life was Momma’s heartbeat, and music takes us back to the oceanic, one-with-the-universe feelings we long ago lost. I’m not suggesting this is a good theory, by any means, but it will aid me in illustrating the four hurdles. I would be hesitant, by the way, to call this “lub-dub” theory of music crazy – our understanding of the origins of music is so woeful that any non-spooky theory is worth a look. Let’s see how lub-dubs fare with our four hurdles for a theory of music.

The first hurdle was this: “Why do we have a brain for music?” That is, why are our brains capable of processing music? For example, fax machines are designed to process the auditory modulations occurring in fax machine communication, but to our ears fax machines sound like a fairly continuous screechy-brrr – we don’t have brains capable of processing fax machine sounds. Music may well sound homogeneously screechy-brrrey to non-human ears, but it sounds richly dynamic and structured to our ears. How might the lub-dub theorist answer why we have a brain for music?

Best I can figure, the lub-dubber could say that our in-utero days of warmth and comfort get strongly associated to Momma’s heartbeat, and the musical beat taps into those associations, bringing back warm fetus feelings.

One difficulty for this hypothesis is that learned associations often don’t last forever, so why would those Momma’s-heartbeat associations be so strong among adults? There are lots of beat-like stimuli out of the womb: some are nice, some are not nice. Why wouldn’t those out-of-the-womb sounds become the dominant association, with the Momma’s heartbeat washed away? And if Momma’s lub-dubs are, for some reason, not washed away, then why aren’t there other in-utero experiences that forever stay with us? Why don’t we, say, like to wear artificial umbilical cords, thereby bringing forth recollections of the womb? “Cuddle with your umbilicus just like the old days. You’ll sleep better. Guaranteed!” And why, at any rate, do we think we were so happy in the womb?  Maybe those days, supposing they leave any trace at all, are associated with nothing whatsoever. (Or perhaps with horror.) The lub-dub theory of music does not have a plausible story for why we have a brain ready and excited to soak up a beat.

The lub-dub theory of music origins also comes up short on the second major demand on a theory of music – that it explain why music is evocative, or emotional.  Heartbeat sounds amount to a one-dimensional parameter – faster or slower rate – and are not sufficiently rich to capture much of the range of human emotion.  Accordingly, heartbeats won’t help much in explaining the range of emotions music can elicit in listeners.

Psychophysiologists who look for physiological correlates of emotion take a variety of measurements (e.g., heart rate, blood pressure, skin conductance), not just one. Heart sounds aren’t rich enough to tug at all music’s heart strings.

Heartbeats also fail the “dance” hurdle. The “dance” requirement is that we explain why it is that music should elicit dance. This fundamental fact about music is a strange thing for sounds to do. In fact, it is a strange thing for any stimulus to do, in any modality. For lub-dubs, the difficulty for the dance hurdle is that even if lub-dubs were fondly recalled by us, and even if they managed to elicit a wide range of emotions, we would have no idea why they should provoke post-uterine people to move, given that even fetuses don’t move to Momma’s heartbeat.

The final requirement of a theory of music is that it explain the structure of music, a tall order. Lub-dubs do have a beat, of course, but heartbeats are far too simple to possibly explain the many other structural regularities found in music. For starters, where is the melody?

Sorry, Mom. Thanks for the good times in your uterus, but I’m afraid your heartbeats are not the source of my fascination with music.

To tip my hand on my upcoming book, my view is that music has been culturally selected over time to sound like human movement, something I have also hinted at in the following pieces…



We have a brain for music because auditory mechanisms for recognizing what people are doing around us are clearly advantageous, and were selected for. Music is evocative because it sounds like human behaviors, many of which are expressive in their nature. Music gets us dancing because we social apes are prone to mimic the movements of others. And, finally, the movement theory is sufficiently powerful that it can explain a lot of the structure of music – something that requires much of my book to describe. I admit that my hypothesis sounds implausible, and I ask that you wait to hear the book-length argument for it.

This first appeared on April 6, 2010, as a feature at Science 2.0


Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

Read Full Post »

A generation ago it was only a brave eclectic minority of psychologists and neuroscientists who dared to address the arts. Things have changed considerably since then. “Art and brain” is now a legitimate and respected target of study, and is approached from a variety of viewpoints, from reductionistic neurophysiology to evolutionary approaches.

Things have changed so quickly that late 20th century conversations about how to create stronger art-science collaborations and connections are dated only a decade later – everyone’s already doing it! And the new generation of students being trained is at home in both the arts and sciences in a way that was rare before.

Although we are all now more culturally comfortable bathing in conversations about art and brain, are we making progress? Has looking into the brain helped us make sense of the arts? Here I will briefly explain why I believe we have made little progress. And then I will propose an alternative route to understanding art and its origins.

Perhaps the most common modus operandi in the cognitive and brain sciences approach to art is (i) to point to some known principle of brain science, and then (ii) to provide examples of art showing conformance with that principle. As fun as it may be to read explanations of art of this kind, the approach suffers from two fundamental difficulties – one on the brain side, one on the arts side.

Let’s start with the “brain” difficulty, which is simply this: we don’t understand the brain. Although the field is jam-packed with fantastically clever experiments giving us fascinating and often valid data, there is usually very little agreement (or ought to be little agreement) about how to distill the data into broad principles. And the broader and higher-level the supposed principle, the more controversial and difficult-to-defend it is. Consequently, most of the supposed principles in the brain sciences remotely rich enough to inform us about the arts are deeply questionable.

If we are so ignorant of the brain, why is the modus operandi above sometimes seemingly able to explain art? There is a lot of art out there, and it comes in a wide variety. Consequently, given any supposed principle from neuroscience or psychology, one can nearly always cherry-pick art pieces fitting it. What very few scientific studies do is attempt to quantitatively gauge whether the predicted feature is a general tendency across the arts. The fundamental difficulty on the “arts” side is that we often don’t have a good idea what facets of art are universal tendencies that need to be explained.

These difficulties on the brain and arts sides make the common modus operandi a poor way to make progress comprehending art and brain. What initially looks like neuroscientific principles being used to explain artistic phenomena is, more commonly, suspect brain principles being used to explain artistic phenomena that may not exist. (A second common approach to linking art and the brain sciences goes in the other direction: to begin with a piece of art, and then to cherry-pick principles from the brain sciences to explain it.)

How, then, should we move forward in our quest to understand the arts? Here I will suggest to you a path, one that addresses the brain and art difficulties above.

The “arts” difficulty can be overcome by identifying regularities actually found in the arts, whether universals, near-universals, or statistical tendencies. One reason large-scale measurements across the arts are not commonly carried out may be that any discipline of the arts tends to be vast and tremendously diverse, and it may seem prima facie unlikely that one will find any interesting regularity. With a strong stomach, however, it is often possible to collect enough data to capture a signal through the noise.

The “arts” difficulty, then, can be addressed by good-old-fashioned data collection, and distillation of empirical regularities. But even so, we are left with another big problem to overcome. “Good-old-fashioned data collection” involves more than simply collecting data. Which data should one collect? And which kinds of regularities should be sought after? Although it is well known that data helps drive theory, it is not as widely appreciated that theory drives data. There are effectively infinitely many ways of collecting data, and effectively unlimited ways of analyzing any set of data. Without theory as a guide, one is not likely to identify empirical regularities at all, much less ones that are interesting. Good-old-fashioned theory is required in good-old-fashioned data collection. We need predictions about empirical regularities, and then need to gather data in a manner designed to test those predictions.

But this brings us back to our first difficulty, the “brain” one. If we are so ignorant of the principles of the brain, then how can we hope to use it to make predictions about regularities in art?

We are, indeed, woefully ignorant of the brain, but we can make progress in explaining art. Here is the fundamental insight I believe we need: the arts have been culturally selected over time to be a “good fit” for our brain, and our brain has been naturally selected over time to be a good fit to nature …so, perhaps the arts have come to be shaped like nature, exactly the shape our brain came to be highly efficient at processing. For example, perhaps music has been culturally selected to be structured like some natural class of stimuli, a class of stimuli our auditory system evolved via natural selection to process. (See Figure 1.)

Figure 1. Natural selection and cultural selection in shaping the brain.

If the arts are as I describe just above – selected to harness our brains by mimicking nature – then we can pursue the origins of art without having to crack open the brain. We can, instead, focus our attention on the regularities found in nature, the regularities which our brains evolved to competently process. I’ll suggest in a moment that we can do exactly this, and give examples where I have been successful at doing so. But first let’s deal with a potential problem…

Don’t brains have quirks? And if so, couldn’t the arts tap into our quirks, and then no analysis of nature would help explain the arts? What do I mean by a quirk? Brains possess mechanisms selected to work well when the inputs to the mechanisms are natural. What happens when the inputs are not natural? That is, what happens when the inputs are of a kind the mechanism was not selected to accommodate? The answer is, “Who knows?!” The mechanism never was selected to accommodate non-natural inputs, and so the mechanism may carry out some arbitrary, inane computation.

To grasp what the mechanism does on these non-natural inputs, we may have no choice but to crack open the hardware and figure out how it actually works. If the arts tended to be culturally selected to tap into the brain’s quirks, then nature wouldn’t help us, and we’d be bound to the brain’s enigmatic details in our grasp of the arts.

There is, however, a good reason to suspect that cultural selection won’t try to harness the brain’s quirks, and the reason is this: quirks are stupid. When your brain mechanisms are running as nature “intended,” they are exceedingly sophisticated machines. When they are run on inputs not in their design specs, however, the behavior of the brain’s mechanisms (now quirks) is typically not intelligent at all. For example, the plastic fork in front of me is well designed for muffin eating, and although I can comb my hair with it, it is a terribly designed comb. The quirks will usually be embarrassing in their lack of sophistication for any task, because they weren’t designed for any task. And that’s fundamentally why we expect the arts to have been culturally selected to tap into our functional brain mechanisms, running roughly as nature “intended.”

If we can set aside the quirks, then we can side-step the brain in our attempt to grasp the origins of the arts. If I am correct about this, we can remove the most complicated object in the universe from the art equation!

With the brain put on the shelf, the goal is, instead, to analyze nature, and use it to explain the structure of the arts. Is this really possible? And isn’t nature just as complicated as the brain, or, at any rate, sufficiently complicated that we’re headed for despair?

No. Nature is filled with simple regularities, many of them with foundations in physics or mathematics. And although it may not be trivial to discover them, our hopes should be far greater than our hopes for unraveling the brain’s mechanisms. Our presumption, then, is that our brains evolved to “know” these regularities of nature, and if we, as scientists, can unravel the regularities, we have thereby unraveled the brain’s competencies. What regularities from nature am I referring to? For the remainder of this piece, I’ll give you three brief examples from my research. Only one is explicitly about the arts, but all three concern the cultural evolution of human artifacts, and how they harness our brains via mimicking nature. (See Figure 2.)

[Figure 2: shaping culture to look like nature in cultural selection]

The first concerns the origins of writing, and why letters are shaped as they are. Our visual systems evolved for more than a hundred million years to be highly competent at visually processing natural scenes. One of the most central features of these natural scenes was simply this: they are filled with opaque objects strewn about. And that is enough to lead to visual regularities in nature. For example, there are three junction types having two contours – L, T and X. Ls happen at many object corners, Ts when one edge goes behind an object, and these two are accordingly common in natural scenes. X, however, is rare in natural scenes.

Matching nature, letter shapes with L and T topologies are also common across languages, but X topologies are rare. More generally, the shapes found more commonly in natural scenes are those found more commonly in writing systems. [See this SB piece for more: http://www.scientificblogging.com/mark_changizi/topography_language ]

The second concerns the origins of speech, and why speech sounds as it does. Our auditory systems evolved for tens of millions of years to be highly efficient at processing natural sounds.

Although nature consists of lots of sounds, one of the most fundamental categories of sound is this: solid-object events. Events among solid objects, it turns out, have rich regularities that one can work out. For starters, there are primarily three kinds of sound among solid objects: hits, slides and rings, the latter occurring as periodic vibrations of objects that have been involved in a physical interaction (namely a hit or a slide). Just as hits, slides and rings are the fundamental atoms of solid-object physical events, speech is built out of hits, slides and rings – called plosives, fricatives and sonorants. For another example, just as solid-object events consist of a physical interaction (hit or slide) followed by the resultant ring, the most fundamental simple structure across languages is the syllable, most commonly of the CV, or consonant-sonorant, form. More generally, and as I describe in my upcoming book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (2011), spoken languages share a wide variety of solid-object event signatures.
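As a toy illustration of the mapping just described – my own sketch, not anything from the book, with all names made up – one can treat a CV syllable as a physical interaction followed by its resonance:

```python
# Illustrative toy: map phoneme classes onto the solid-object
# event types they are claimed to mimic.
EVENT_FOR_PHONEME_CLASS = {
    "plosive": "hit",      # e.g. /p/, /t/, /k/: abrupt contact
    "fricative": "slide",  # e.g. /s/, /f/: sustained friction
    "sonorant": "ring",    # e.g. vowels, /m/, /l/: periodic vibration
}

def syllable_event(onset_class, nucleus_class):
    """Interpret a CV syllable as a physical interaction followed by a ring."""
    interaction = EVENT_FOR_PHONEME_CLASS[onset_class]
    resonance = EVENT_FOR_PHONEME_CLASS[nucleus_class]
    return f"{interaction} -> {resonance}"

print(syllable_event("plosive", "sonorant"))    # hit -> ring, as in "ta"
print(syllable_event("fricative", "sonorant"))  # slide -> ring, as in "sa"
```

The point of the sketch is only that the syllable’s canonical order (consonant, then sonorant) mirrors the physical order of events: first the interaction, then the resulting ring.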

Written and spoken language look and sound like fundamental aspects of nature: opaque objects strewn about and solid objects interacting with one another, respectively. Writing thereby harnesses our visual object-recognition mechanisms, and speech harnesses our event-recognition mechanisms. Neither opaque objects nor solid objects are especially evocative sources in nature, and that’s why the look of most writing and the sound of most speech are not evocative. [See this SciAm piece for more: http://www.scientificamerican.com/article.cfm?id=why-does-music-make-us-fe ]

Music – the third cultural production I have addressed with a nature-harnessing approach – is astoundingly evocative. What kind of story could I give here? A nature-harnessing theory would have to posit a class of natural auditory stimuli that music has culturally evolved to mimic – but haven’t I already dealt with nature’s sounds in my story for speech? Not entirely: in addition to general event-recognition systems, we probably possess auditory mechanisms specifically designed for the recognition of human behavior. Human gait, I have argued, has signature patterns that are found in the regularities of music’s rhythm. Doppler shifts of movers have regularities that one can work out, and these regularities are found in music’s melodic contours. And loudness modulations due to proximity predict how loudness is used in music.

These results are described in my upcoming book, Harnessed. For example, just as faster movers have a greater range of pitches from their directed-toward-you high pitch to their directed-away-from-you low pitch, faster tempo music tends to use a wider range of pitches for its melody. [See this SB piece for more: http://www.scientificblogging.com/mark_changizi/music_sounds_moving_people ]
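The Doppler regularity just mentioned can be sketched numerically. This is an illustrative toy of my own, not code from the book; the function name and the chosen speeds are assumptions. For a stationary listener, a source moving at speed v sounds higher-pitched when headed toward you (f = f0·c/(c − v)) and lower-pitched when headed away (f = f0·c/(c + v)), so the mover’s total pitch range in semitones is 12·log2((c + v)/(c − v)), which grows with speed:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 °C

def doppler_pitch_range_semitones(mover_speed):
    """Pitch range (semitones) between a mover headed straight toward a
    stationary listener and the same mover headed straight away.
    Toward: f = f0 * c / (c - v).  Away: f = f0 * c / (c + v).
    Their ratio is (c + v) / (c - v), independent of the emitted pitch f0."""
    c, v = SPEED_OF_SOUND, mover_speed
    return 12 * math.log2((c + v) / (c - v))

# Faster movers span a wider pitch range:
for v in (1.5, 4.0, 10.0):  # stroll, jog, sprint (m/s)
    print(f"{v:5.1f} m/s -> {doppler_pitch_range_semitones(v):.2f} semitones")
```

Even a sprinter’s range is only about a semitone, but the monotonic relationship is the point: the faster the mover, the wider the pitch range, paralleling the faster-tempo/wider-melody pattern described above.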

[Figure: the structure of nature-harnessing arguments for speech, writing, and music]

Many other aspects of the arts are potentially treatable in a similar fashion. For example, color vision, I have argued, is optimized for detecting subtle spectral shifts in other people’s skin, indicating modulations in their emotion, mood or state. That is, color vision is a sense designed for the emotions of other people, and it is possible to understand the meanings of colors on this basis, e.g., red is strong because oxygenated hemoglobin is required for skin to display it. The visual arts are expected to have harnessed our brain’s color mechanisms by using colors as found in nature, namely principally as found on skin. Again, the strategy is to understand art without having to unravel the brain’s mechanisms.

One of the morals I want to convey is that you don’t have to be a neuroscientist to take a brain-based approach to art. The brain’s competencies can be ferreted out without going inside, by carving nature at its joints, just the joints the brain evolved to carve at. One can then search for signs of nature in the structure of the arts. My hope is that via the progress I have made for writing, speech and music, others will be motivated to take up the strategy for grappling with all facets of the arts, and cultural artifacts more generally.

This first appeared on March 4, 2010, as a feature at ScientificBlogging.com.


Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

Read Full Post »

This first appeared on January 10, 2010, as a feature at ScientificBlogging.com.

Joggers love their headphones. If you ask them why, they’ll tell you the music keeps them motivated. The right song can transform what is by all rights an arduous half hour of ascetic masochism into an exhilarating whirlwind (or, in my case, into what feels like only 25 minutes of ascetic masochism).

Music-driven joggers may be experiencing a pleasurable diversion, but to the joggers and bikers in their vicinity, they’re Tasmanian Devils.

In choosing to jog to the beat of someone else’s drum rather than their own, headphoned joggers have blinded themselves to the sounds of the other movers around them. Headphones don’t prevent joggers from deftly navigating the trees, stumps, curbs, and parked cars of the world because these things can be seen as one approaches. But when one moves in a world with other movers, things not currently in front of you can quickly come to be in front of you. This is where the headphoned jogger stumbles … and crashes into the crossing jogger, passing biker, or first-time tricycler.

These music-filled movers may be a menace to our streets, but they can serve to educate us all about one of our underappreciated powers: using sound alone, we know where people are around us, and the nature of their movement. I’m sitting in a coffee shop as I write this, and when I close my eyes I sense the movement all around me: a clop of boots just passed to my right; a jingling-key person just walked in front of me from my right to my left, and back; and the pitter-patter of a child just meandered way out in front of me. I sense where they are, their direction of motion, and their speed. I also sense their gait, such as whether they are walking or running. And I can often tell more than this, such as a brisk versus shuffling walk, an angry stomp versus a happy prance, or even a complex behavior, like turning and stopping to drop a dirty tray in a bin, slowing to open a door, or reversing direction to get a forgotten coffee. My auditory system carries out these mover-detection computations even when I’m not consciously attending to them. That’s why I’m difficult to sneak up on (although they keep trying!), and that’s why I only rarely find myself saying, “How long has that cheerleading squad been doing jumping jacks behind me?!” That almost never happens to me because my auditory system is keeping track of where people are and roughly what they’re doing, even when I’m otherwise occupied.

We can now see why joggers with their ears unencumbered by headphones almost never crash into feral dogs or runaway wheelchaired grandpas: they may not see the dog or grandpa, but they hear their movement through space, and can dynamically modulate their running to avoid both, and be merrily on their way. Without headphones, joggers are highly sensitive to the sounds of cars, and can track their movement: that car is coming around the bend; the one over there is reversing directly toward me; the one above me is falling; and so on. Headphoned joggers, on the other hand, have turned off their movement-detection system, and should be passed with caution! And although they are a hazard to pedestrians and cyclists, the people they put at greatest risk are themselves. This is because where there are joggers there are often cars nearby, and in collisions between a jogger and an automobile, automobiles typically only need a power-wash to the grille.

How does your auditory system serve as a movement tracking system? In addition to sensing whether a mover is to your left or right, in front or behind, and above or below – something that depends on the shape, position and number of ears you have – you possess specialized auditory software that interprets the sounds of movers and generates a good guess as to the mover’s movement through space. Your software has evolved to give you four kinds of information about a mover: (i) his distance from you, (ii) his directedness toward you, (iii) his speed, and (iv) his behavior or gait. How, then, does your auditory system infer these four kinds of information?

Evidence suggests that (i) distance is gleaned from loudness, (ii) directedness toward you can be cued by pitch (due to subtle but detectable Doppler shifts), (iii) speed is inferred by the number of footsteps per second, and (iv) behavior and gait are read from the pattern of footsteps. Four fundamental parameters of human movement, and four kinds of auditory cue: (i) loudness, (ii) sound frequency, (iii) step rate, and (iv) gait pattern.
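Those four inferences can be sketched as back-of-the-envelope formulas. This is a hedged illustration of my own, not the auditory system’s actual algorithm; the function name, the reference-loudness scheme, and the 0.9 m stride are all assumptions. Distance follows from the inverse-square law (about −6 dB per doubling of distance), radial velocity from inverting the Doppler relation, speed from step rate times stride length, and gait – very crudely – from a step-rate threshold standing in for real pattern analysis:

```python
def estimate_mover(loudness_db, baseline_db, baseline_distance_m,
                   observed_hz, emitted_hz, steps_per_s, stride_m=0.9):
    """Toy versions of the four inferences: distance, directedness,
    speed, and gait. All parameter names and constants are illustrative."""
    c = 343.0  # speed of sound, m/s
    # (i) distance from loudness: inverse-square law, -6 dB per doubling
    distance = baseline_distance_m * 10 ** ((baseline_db - loudness_db) / 20)
    # (ii) directedness from Doppler shift: f_obs = f0 * c / (c - v_radial),
    # so v_radial = c * (1 - f0 / f_obs); positive means approaching
    v_radial = c * (1 - emitted_hz / observed_hz)
    # (iii) speed from step rate
    speed = steps_per_s * stride_m
    # (iv) gait from step rate (a crude threshold stands in for the
    # footstep-pattern analysis a real listener performs)
    gait = "running" if steps_per_s > 2.5 else "walking"
    return distance, v_radial, speed, gait

# A mover 6 dB quieter than a 1 m baseline, slightly Doppler-raised in
# pitch, taking three steps per second:
print(estimate_mover(54, 60, 1.0, 101.0, 100.0, 3.0))
```

The sketch is of course far simpler than what the auditory system does – real footsteps are not pure tones, and gait is read from the whole temporal pattern, not a rate threshold – but it shows how each of the four cues carries one of the four parameters.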

Your auditory system has evolved to track these cues because of the supreme value in knowing where and what everyone is doing nearby.

This is where things get interesting… Even though joggers without headphones are not listening to music, their auditory systems are listening to fundamentally music-like constituents. Consider the four auditory movement cues mentioned just above (and shown on the right of Figure 2). Loudness? That’s just pianissimo versus piano versus forte and so on. Sound frequency? That’s roughly pitch. Step rate? That’s tempo. And the gait pattern? That’s akin to rhythm and beat. The four fundamental auditory cues for movement are, then, awfully similar to (i) loudness, (ii) pitch, (iii) tempo, and (iv) rhythm.

These are the most fundamental ingredients of music, and yet, there they are in the sounds of human movers. The most informative sounds of human movers are the fundamental building blocks of music!

The importance of loudness, pitch, tempo and rhythm to both music and movement is, I believe, more than a coincidence. The similarity runs deep – something speculated on ever since the Greeks. Research in my lab has been providing evidence that music is not just built with the building blocks of movement, but is actually organized like movement, thereby harnessing our movement-recognition auditory mechanisms. The story this leads to for music is this: music has been culturally selected to sound like people moving – just the kinds of sounds your auditory system evolved to be great at processing, and just the kinds of sounds that can possess the emotional content that makes music evocative and worth listening to.

Music is evocative because it is made from the sounds of people, something I also wrote about here [ http://bit.ly/rcKVh ], and something I will discuss further in the future.

Headphoned joggers, then, aren’t merely missing out on the real movement around them – they pipe into their ears a fictional movement, making them even more hazardous than a jogger wearing earplugs.


Read Full Post »

Older Posts »