
Archive for the ‘Origins of music’ Category

The Library Journal has a short review by Cynthia Knight of my book, Harnessed.

Many scientists believe that the human brain’s capacity for language is innate, that the brain is actually “hard-wired” for this higher-level functionality. But theoretical neurobiologist Changizi (director of human cognition, 2AI Labs; The Vision Revolution) brilliantly challenges this view, claiming that language (and music) are neither innate nor instinctual to the brain but evolved culturally to take advantage of what the most ancient aspect of our brain does best: process the sounds of nature. By “sounds of nature,” Changizi does not mean birds chirping or rain falling. His provocative theory is based on the identification of striking similarities between the phoneme level of language and the elemental auditory properties of solid objects and, in the case of music, similarities between the sounds of human movement and the basic elements of music.

Verdict: Although the book is written in a witty, informal style, the science underpinning this theoretical argument (acoustics, phonology, physics) could be somewhat intimidating to the nonspecialist. Still, it will certainly intrigue evolutionary biologists, linguists, and cultural anthropologists and is strongly recommended for libraries that have Changizi’s previous book.

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of
Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man and The Vision Revolution.

Read Full Post »

Daniel Levitin reviews my new book, Harnessed, in the WSJ. And he’s not happy about it.

Now, I’m not a fan of tit-for-tat responses to book reviews, so I’ll let you gauge Levitin’s arguments for yourself after reading my book.

But one casualty of his review is humor — or Levitin’s lack of recognition of it — and that I’ll correct here.

You see, in my book I boast, as Levitin tells us, “about classrooms of undergraduates standing in awe of” me.

What a start to a review! I’m painted as a boastful braggart on the first line of entry into ChangiziLand (“ChangiziLand” is where all my awe-filled followers live).

And, my god, it’s true! I indeed do say something along those lines! In fact, my own words now (p. 32):

“It can be difficult for students to attract my attention when I am lecturing. My occasional glances in their direction aren’t likely to notice a static arm raised in the standing-room-only lecture hall…”

What. An. Arse! …I’m referring to me.

Except– Wait. I wrote more.

“…and so they are reduced to jumping and gesturing wildly in the hope of catching my eye. And that’s why, whenever possible, I keep the house lights turned off.”

Well that’s peculiar. Are my students really “jumping and gesturing wildly”? Really? And do I actually turn the house lights off to prevent my having to view said wild gesturing?

Perhaps. Levitin doesn’t know me from Adam, so, uh, maybe that really happens in my lectures.

But here’s the fuller excerpt from that section…

It can be difficult for students to attract my attention when I am lecturing. My occasional glances in their direction aren’t likely to notice a static arm raised in the standing-room-only lecture hall, and so they are reduced to jumping and gesturing wildly in the hope of catching my eye. And that’s why, whenever possible, I keep the house lights turned off. There are, then, three reasons why my students have trouble visually signaling me: (i) they tend to be behind my head as I write on the chalkboard, (ii) many are occluded by other people, are listening from behind pillars, or are craning their necks out in the hallway, and (iii) they’re literally in the dark.

These three reasons are also the first ones that come to mind for why languages everywhere employ audition (with the secondary exceptions of writing and signed languages for the deaf) rather than vision. We cannot see behind us, through occlusions, or in the dark; but we can hear behind us, through occlusions, and in the dark. In situations where one or more of these — (i), (ii), and (iii) above — apply, vision fails, but audition is ideal. Between me and the students in my course lectures, all three of these conditions apply, and so vision is all but useless as a route to my attention. In such a scenario a student could develop a firsthand appreciation of the value of speech for orienting a listener. And if it weren’t for the fact that I wear headphones blasting Beethoven when I lecture, my students might actually learn this lesson.

And did you hear that last part? I jam to classical music during my lecturing so that I cannot possibly hear any questions from students. That’s just…impractical!

If it still wasn’t obvious that I was joking, several paragraphs further down I indicate — just for the barely-reading, I-already-think-Changizi-is-a-prick reader — that my earlier-mentioned gesticulating students are fictional.

~~~

Mark Changizi is God of Human Cognition at 2AI, and the author of most excellent books such as The Vision rEvolution and Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.

Read Full Post »

Christine Ottery (that’s not her above) recently interviewed me about bare naked skin and the origins of color vision, and she wrote up her piece in Scientific American. Read it here.

Also, note the “Lady Gaga” connection in the piece. This is not the first time “Lady Gaga” has been all over my research — the words, not the actual woman. She also comes up in a story about my research on the origins of music, which you can read here at Gaga-galore.

Let’s keep up the pressure, and perhaps Lady Gaga will hire me as her scientific aesthetics advisor…

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books). He is working on his fourth book at the moment, tentatively titled Making Faces, about emotions and facial expressions.

 

Read Full Post »

I believe that music sounds like people, moving. Yes, the idea may sound a bit crazy, but it’s an old idea, much discussed in the 20th century, and going all the way back to the Greeks. There are lots of things going for the theory, including that it helps us explain…

(1) why our brains are so good at absorbing music (…because we evolved to possess human-movement-detecting auditory mechanisms),

(2) why music emotionally moves us (…because human movement is often expressive of the mover’s mood or state), and

(3) why music gets us moving (…because we’re a social species prone to social contagion).

And as I describe in detail in my upcoming book — Harnessed: How Language and Music Mimicked Nature and Transformed Ape To Man — music has the signature auditory patterns of human movement (something I hint at in this older piece of mine).

Here I’d like to describe a novel way of thinking about what the meaning of music might be. Rather than dwelling on the sound of music, I’d like to focus on the look of music. In particular, what does our brain think music looks like?

It is natural to assume that the visual information streaming into our eyes determines the visual perceptions we end up with, and that the auditory information entering our ears determines the events we hear.

But the brain is more complicated than this. Visual and auditory information interact in the brain, and the brain utilizes both to guess the single scene to render a perception of. For example, the research of Ladan Shams, Yukiyasu Kamitani and Shinsuke Shimojo at Caltech has shown that we perceive a single flash as a double flash if it is paired with a double beep. And Robert Sekuler and others from Brandeis University have shown that if a sound occurs at the time when two balls pass through each other on screen, the balls are instead perceived to have collided and reversed direction.

These and other results of this kind demonstrate the interconnectedness of visual and auditory information in our brain. Visual ambiguity can be reduced with auditory information, and vice versa. And, generally, both are brought to bear in the brain’s attempt to infer the best guess about what’s out there.

Your brain does not, then, consist of independent visual and auditory systems, with separate troves of visual and auditory “knowledge” about the world. Instead, vision and audition talk to one another, and there are regions of cortex responsible for making vision and audition fit one another.

These regions know about the sounds of looks and the looks of sounds.

Because of this, when your brain hears something but cannot see it, your brain does not just sit by and refrain from guessing what it might have looked like.

When your auditory system makes sense of something, it will have a tendency to activate visual areas, eliciting imagery of its best guess as to the appearance of the stuff making the sound.

For example, the sound of your neighbor’s rustling tree may bring to mind an image of its swaying lanky branches. The whine of your cat heard far away may evoke an image of it stuck up high in that tree. And the pumping of your neighbor’s kid’s BB gun can bring forth an image of the gun being pointed at Foofy way up there.

Your visual system has, then, strong opinions about the proper look of the things it hears.

And, bringing ourselves back to music, we can use the visual system’s strong opinions as a means for gauging music’s meaning.

In particular, we can ask your visual system what it thinks the appropriate visual is for music.

If, for example, the visual system responds to music with images of beating hearts, then it would suggest, to my disbelief, that music mimics the sounds of heartbeats. If, instead, the visual system responds with images of pornography, then it would suggest that music sounds like sex. You get the idea.

But in order to get the visual system to act like an oracle, we need to get it to speak. How are we to know what the visual system thinks music looks like?

One approach is simply to ask which visuals are, in fact, associated with music. For example, when people create imagery of musical notes, what does it look like? One cheap way to look into this is simply to do a Google (or any search engine) image search on the term “musical notes.” You might think such a search would merely return images of simple notes on the page.

However, that is not what one finds. To my surprise, actually, most of the images are like the one in the nearby figure, with notes drawn in such a way that they appear to be moving through space.

Notes in musical notation never actually look anything like this, and real musical notes have no look at all (because they are sounds). And yet we humans seem to be prone to visually depicting notes as moving all about.


Music tends to be depicted as moving.

Could these images of notes in motion be due to a more mundane association?

Music is played by people, and people have to move in order to play their instrument. Could this be the source of the movement-music association? I don’t think so, because the movement suggested in these images of notes doesn’t look like an instrument being played. In fact, it is common to show images of an instrument with the notes beginning their movement through space from the instrument: these notes are on their way somewhere, not an indication of the musician’s key-pressing or back-and-forth movements.

Could it be that the musical notes are depicted as moving through space because sound waves move through space? The difficulty with this hypothesis is that all sound moves through space. All sound would, if this were the case, be visually rendered as moving through space, but that’s not the case. For example, speech is not usually visually rendered as moving through space. Another difficulty is that the musical notes are usually meandering in these images, but sound waves are not meandering — sound waves go straight. A third problem with sound waves underlying the visual metaphor is that we never see sound waves in the first place.

Another possible counter-hypothesis is that the depiction of visual movement in the images of musical notes is because all auditory stimuli are caused by underlying events with movement of some kind. The first difficulty, as was the case for sound waves, is that it is not the case that all sound is visually rendered in motion. The second difficulty is that, while it is true that sounds typically require movement of some kind, it need not be movement of the entire object through space. Moving parts within the object may make the noise, without the object going anywhere. In fact, the three examples I gave earlier — leaves rustling, Foofy whining, and the BB gun pumping — are noises without any bulk movement of the object (the tree, Foofy, and the BB gun, respectively). The musical notes in imagery, on the other hand, really do seem to be moving, in bulk, across space.

Music is like tree-rustling, Foofy, BB guns and human speech in that it is not made via bulk movement through space. And yet music appears to be unique in this tendency to be visually depicted as moving through space.

In addition, not only are musical notes rendered as in motion, they tend to be depicted as meandering.

When visually rendered, music looks alive and in motion (often along the ground), just what one might expect if music’s secret is that it sounds like people moving.

A Google Image search on “musical notes” is one means by which one may attempt to discern what the visual system thinks music looks like, but another is to simply ask ourselves what is the most common visual display shown during music. That is, if people were to put videos to music, what would the videos tend to look like?

Lucky for us, people do put videos to music! They’re called music videos, of course. And what do they look like?

The answer is so obvious that it hardly seems worth noting: music videos tend to show people moving about, usually in a time-locked fashion to the music, very often dancing.

As obvious as it is that music videos typically show people moving, we must remember to ask ourselves why music isn’t typically visually associated with something very different. Why aren’t music videos mostly of rivers, avalanches, car races, wind-blown grass, lion hunts, fire, or bouncing balls?

It is because, I am suggesting, our brain thinks that humans moving about is what music should look like…because it thinks that humans moving about is what music sounds like.

Musical notes are rendered as meandering through space. Music videos are built largely from people moving, and in a time-locked manner to the music. That’s beginning to suggest that the visual system is under the impression that music sounds like human movement.

But if that’s really what the visual system thinks, then it should have more opinions than simply that music sounds like movement. It should have opinions about what, more exactly, the movement should look like.

Do our visual systems have opinions this precise? Are we picky about the mover that’s put to music?

You bet we are! That’s choreography. It’s not enough to play a video of the Nutcracker ballet during Beatles music, nor will it suffice to play a video of the Nutcracker to the music of Nutcracker, but with a small time lag between them. The video of human movement has to have all the right moves at the right time to be the right fit for the music.

These strong opinions about what music looks like make perfect sense if music mimics human movement sounds. In real life, when people carry out complex behaviors, their visual movements are tightly choreographed with the sounds – because the sight and sound are due to the same event. When you hear movement, you expect to see that same movement. Music sounds to your brain like human movement, which is why when your brain hears music, it expects that any visual of it should be consistent with it.

~~~

This was adapted from Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011). It first appeared July 26, 2010, at Psychology Today.

Mark Changizi is Professor of Human Cognition at 2AI, and author of The Vision Revolution.

Read Full Post »

It is my pleasure to announce that my upcoming book, HARNESSED (Benbella, 2011), can now be pre-ordered at Amazon!

It is about how we came to have language and music. …about how we became modern humans. See http://changizi.wordpress.com/book-harnessed/ for more about the book.

~~~

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

Read Full Post »

I believe that music sounds like people, moving.

Yes, the idea may sound a bit crazy, but it’s an old idea, much discussed in the 20th century, and going all the way back to the Greeks. There are lots of things going for the theory, including that it helps us explain (1) why our brains are so good at absorbing music (…because we evolved to possess human-movement-detecting auditory mechanisms), (2) why music emotionally moves us (…because human movement is often expressive of the mover’s mood or state), and (3) why music gets us moving (…because we’re a social species prone to social contagion).

And as I describe in detail in my upcoming book – “Harnessed: How Language and Music Mimicked Nature and Transformed Ape To Man” – music has the signature auditory patterns of human movement (something I hint at here http://www.science20.com/mark_changizi/music_sounds_moving_people ).

Here I’d like to describe a novel way of thinking about what the meaning of music might be.

Rather than dwelling on the sound of music, I’d like to focus on the look of music.

In particular, what does our brain think music looks like?

It is natural to assume that the visual information streaming into our eyes determines the visual perceptions we end up with, and that the auditory information entering our ears determines the events we hear. But the brain is more complicated than this. Visual and auditory information interact in the brain, and the brain utilizes both to guess the single scene to render a perception of. For example, the research of Ladan Shams, Yukiyasu Kamitani and Shinsuke Shimojo at Caltech has shown that we perceive a single flash as a double flash if it is paired with a double beep. And Robert Sekuler and others from Brandeis University have shown that if a sound occurs at the time when two balls pass through each other on screen, the balls are instead perceived to have collided and reversed direction. These and other results of this kind demonstrate the interconnectedness of visual and auditory information in our brain. Visual ambiguity can be reduced with auditory information, and vice versa. And, generally, both are brought to bear in the brain’s attempt to infer the best guess about what’s out there.

Your brain does not, then, consist of independent visual and auditory systems, with separate troves of visual and auditory “knowledge” about the world. Instead, vision and audition talk to one another, and there are regions of cortex responsible for making vision and audition fit one another. These regions know about the sounds of looks and the looks of sounds. Because of this, when your brain hears something but cannot see it, your brain does not just sit by and refrain from guessing what it might have looked like. When your auditory system makes sense of something, it will have a tendency to activate visual areas, eliciting imagery of its best guess as to the appearance of the stuff making the sound. For example, the sound of your neighbor’s rustling tree may bring to mind an image of its swaying lanky branches. The whine of your cat heard far away may evoke an image of it stuck up high in that tree. And the pumping of your neighbor’s kid’s BB gun can bring forth an image of the gun being pointed at Foofy way up there.

Your visual system has, then, strong opinions about the proper look of the things it hears. And, bringing ourselves back to music, we can use the visual system’s strong opinions as a means for gauging music’s meaning. In particular, we can ask your visual system what it thinks the appropriate visual is for music. If, for example, the visual system responds to music with images of beating hearts, then it would suggest, to my disbelief, that music mimics the sounds of heartbeats. If, instead, the visual system responds with images of pornography, then it would suggest that music sounds like sex. You get the idea.

But in order to get the visual system to act like an oracle, we need to get it to speak. How are we to know what the visual system thinks music looks like? One approach is simply to ask which visuals are, in fact, associated with music. For example, when people create imagery of musical notes, what does it look like? One cheap way to look into this is simply to do a Google (or any search engine) image search on the term “musical notes.” You might think such a search would merely return images of simple notes on the page. However, that is not what one finds. To my surprise, actually, most of the images are like the one in the nearby figure, with notes drawn in such a way that they appear to be moving through space. Notes in musical notation never actually look anything like this, and real musical notes have no look at all (because they are sounds). And yet we humans seem to be prone to visually depicting notes as moving all about.

Could these images of notes in motion be due to a more mundane association? Music is played by people, and people have to move in order to play their instrument. Could this be the source of the movement-music association? I don’t think so, because the movement suggested in these images of notes doesn’t look like an instrument being played. In fact, it is common to show images of an instrument with the notes beginning their movement through space from the instrument: these notes are on their way somewhere, not an indication of the musician’s key-pressing or back-and-forth movements.

Could it be that the musical notes are depicted as moving through space because sound waves move through space? The difficulty with this hypothesis is that all sound moves through space. All sound would, if this were the case, be visually rendered as moving through space, but that’s not the case. For example, speech is not usually visually rendered as moving through space. Another difficulty is that the musical notes are usually meandering in these images, but sound waves are not meandering – sound waves go straight. A third problem with sound waves underlying the visual metaphor is that we never see sound waves in the first place.

Another possible counter-hypothesis is that the depiction of visual movement in the images of musical notes is because all auditory stimuli are caused by underlying events with movement of some kind. The first difficulty, as was the case for sound waves, is that it is not the case that all sound is visually rendered in motion. The second difficulty is that, while it is true that sounds typically require movement of some kind, it need not be movement of the entire object through space. Moving parts within the object may make the noise, without the object going anywhere. In fact, the three examples I gave earlier – leaves rustling, Foofy whining, and the BB gun pumping – are noises without any bulk movement of the object (the tree, Foofy, and the BB gun, respectively).  The musical notes in imagery, on the other hand, really do seem to be moving, in bulk, across space.

Music is like tree-rustling, Foofy, BB guns and human speech in that it is not made via bulk movement through space. And yet music appears to be unique in this tendency to be visually depicted as moving through space. In addition, not only are musical notes rendered as in motion, they tend to be depicted as meandering.

When visually rendered, music looks alive and in motion (often along the ground), just what one might expect if music’s secret is that it sounds like people moving.

A Google Image search on “musical notes” is one means by which one may attempt to discern what the visual system thinks music looks like, but another is to simply ask ourselves what is the most common visual display shown during music. That is, if people were to put videos to music, what would the videos tend to look like?

Lucky for us, people do put videos to music! They’re called music videos, of course. And what do they look like? The answer is so obvious that it hardly seems worth noting: music videos tend to show people moving about, usually in a time-locked fashion to the music, very often dancing.

As obvious as it is that music videos typically show people moving, we must remember to ask ourselves why music isn’t typically visually associated with something very different. Why aren’t music videos mostly of rivers, avalanches, car races, wind-blown grass, lion hunts, fire, or bouncing balls? It is because, I am suggesting, our brain thinks that humans moving about is what music should look like…because it thinks that humans moving about is what music sounds like.

Musical notes are rendered as meandering through space. Music videos are built largely from people moving, and in a time-locked manner to the music. That’s beginning to suggest that the visual system is under the impression that music sounds like human movement. But if that’s really what the visual system thinks, then it should have more opinions than simply that music sounds like movement. It should have opinions about what, more exactly, the movement should look like. Do our visual systems have opinions this precise? Are we picky about the mover that’s put to music?

You bet we are! That’s choreography. It’s not enough to play a video of the Nutcracker ballet during Beatles music, nor will it suffice to play a video of the Nutcracker to the music of Nutcracker, but with a small time lag between them. The video of human movement has to have all the right moves at the right time to be the right fit for the music.

These strong opinions about what music looks like make perfect sense if music mimics human movement sounds. In real life, when people carry out complex behaviors, their visual movements are tightly choreographed with the sounds – because the sight and sound are due to the same event. When you hear movement, you expect to see that same movement. Music sounds to your brain like human movement, which is why when your brain hears music, it expects that any visual of it should be consistent with it.

~~~

This was adapted from Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011).

~~~

This first appeared July 28, 2010, at Science 2.0.

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

Read Full Post »

There’s a good chance that you’re listening to music while reading this, and if you happen not to be, my bet is that you listen to music in the car, or at home, or while jogging. In all likelihood, you love music – simply love it.

Why?  What is it about those auditory patterns counting as “music” that makes us relish it so?

I have my own opinion about the answer, the topic of my recently finished book that will appear next year, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man. I’ll give you a hint as to my view at the end of this piece, but what I’d like to do in this piece is to put forth four hurdles I believe any theory of music must leap over.

Brain: Why do we have a brain for music?
Emotion: Why is music emotionally evocative?
Dance: Why do we dance?
Structure: Why is music structurally organized as it is?

If a theory can answer all four questions, then I believe we should start paying attention.

To help clarify what I mean by these questions, let’s run through them in the context of a particular lay theory of music, namely the “heartbeat” theory of music. Although there is probably not just a single heartbeat theory put forth by lay people, the main motivation appears to be that a heart carries a beat, something fundamental to music. Of course, we don’t typically hear our own heartbeat, much less anyone else’s, so when the theory is fleshed out I have heard it suggested that the beat comes from our in-utero days. One of the constants of the good fetus life was Momma’s heartbeat, and music takes us back to the oceanic, one-with-the-universe feelings we long ago lost. I’m not suggesting this is a good theory, by any means, but it will aid me in illustrating the four hurdles. I would be hesitant, by the way, to call this “lub-dub” theory of music crazy – our understanding of the origins of music is so woeful that any non-spooky theory is worth a look. Let’s see how lub-dubs fare with our four hurdles for a theory of music.

The first hurdle was this: “Why do we have a brain for music?” That is, why are our brains capable of processing music? For example, fax machines are designed to process the auditory modulations occurring in fax machine communication, but to our ears fax machines sound like a fairly continuous screechy-brrr – we don’t have brains capable of processing fax machine sounds. Music may well sound homogeneously screechy-brrrey to non-human ears, but it sounds richly dynamic and structured to our ears. How might the lub-dub theorist answer why we have a brain for music?

Best I can figure, the lub-dubber could say that our in-utero days of warmth and comfort get strongly associated to Momma’s heartbeat, and the musical beat taps into those associations, bringing back warm fetus feelings.

One difficulty for this hypothesis is that learned associations often don’t last forever, so why would those Momma’s-heartbeat associations be so strong among adults? There are lots of beat-like stimuli out of the womb: some are nice, some are not nice. Why wouldn’t those out-of-the-womb sounds become the dominant association, with the Momma’s heartbeat washed away? And if Momma’s lub-dubs are, for some reason, not washed away, then why aren’t there other in-utero experiences that forever stay with us? Why don’t we, say, like to wear artificial umbilical cords, thereby bringing forth recollections of the womb? “Cuddle with your umbilicus just like the old days. You’ll sleep better. Guaranteed!” And why, at any rate, do we think we were so happy in the womb?  Maybe those days, supposing they leave any trace at all, are associated with nothing whatsoever. (Or perhaps with horror.) The lub-dub theory of music does not have a plausible story for why we have a brain ready and excited to soak up a beat.

The lub-dub theory of music origins also comes up short on the second major demand on a theory of music – that it explain why music is evocative, or emotional.  Heartbeat sounds amount to a one-dimensional parameter – faster or slower rate – and are not sufficiently rich to capture much of the range of human emotion.  Accordingly, heartbeats won’t help much in explaining the range of emotions music can elicit in listeners.

Psychophysiologists who look for physiological correlates of emotion take a variety of measurements (e.g., heart rate, blood pressure, skin conductance), not just one. Heart sounds aren’t rich enough to tug at all of music’s heartstrings.

Heartbeats also fail the “dance” hurdle. The “dance” requirement is that we explain why it is that music should elicit dance. This fundamental feature of music is a strange thing for sounds to do. In fact, it is a strange thing for any stimulus to do, in any modality. For lub-dubs, the difficulty for the dance hurdle is that even if lub-dubs were fondly recalled by us, and even if they managed to elicit a wide range of emotions, we would have no idea why they should provoke post-uterine people to move, given that even fetuses don’t move to Momma’s heartbeat.

The final requirement of a theory of music is that it explain the structure of music, a tall order. Lub-dubs do have a beat, of course, but heartbeats are far too simple to possibly explain the many other structural regularities found in music. For starters, where is the melody?

Sorry, Mom. Thanks for the good times in your uterus, but I’m afraid your heartbeats are not the source of my fascination with music.

To tip my hand on my upcoming book, my view is that music has been culturally selected over time to sound like human movement, something I have also hinted at in the following pieces…

http://www.scientificblogging.com/mark_changizi/music_sounds_moving_people

http://changizi.wordpress.com/2009/09/25/scientific-american-piece-why-does-music-make-us-feel/

We have a brain for music because auditory mechanisms for recognizing what people are doing around us are clearly advantageous, and were selected for. Music is evocative because it sounds like human behaviors, many of which are expressive in their nature. Music gets us dancing because we social apes are prone to mimic the movements of others. And, finally, the movement theory is sufficiently powerful that it can explain a lot of the structure of music – something that requires much of my book to describe. I admit that my hypothesis sounds implausible, and I ask that you wait to hear the book-length argument for it.

This first appeared on April 6, 2010, as a feature at Science 2.0

=============

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Read Full Post »

A generation ago it was only a brave eclectic minority of psychologists and neuroscientists who dared to address the arts. Things have changed considerably since then. “Art and brain” is now a legitimate and respected target of study, and is approached from a variety of viewpoints, from reductionistic neurophysiology to evolutionary approaches.

Things have changed so quickly that late 20th century conversations about how to create stronger art-science collaborations and connections are dated only a decade later – everyone’s already doing it! And the new generation of students being trained is at home in both the arts and sciences in a way that was rare before.

Although we are all now more culturally comfortable bathing in conversations about art and brain, are we making progress? Has looking into the brain helped us make sense of the arts? Here I will briefly explain why I believe we have made little progress. And then I will propose an alternative route to understanding art and its origins.

Perhaps the most common modus operandi in the cognitive and brain sciences approach to art is (i) to point to some known principle of brain science, and then (ii) to provide examples of art showing conformance with that principle. As fun as it may be to read explanations of art of this kind, the approach suffers from two fundamental difficulties – one on the brain side, one on the arts side.

Let’s start with the “brain” difficulty, which is simply this: we don’t understand the brain. Although the field is jam-packed with fantastically clever experiments giving us fascinating and often valid data, there is usually very little agreement (or ought to be little agreement) about how to distill the data into broad principles. And the broader and higher-level the supposed principle, the more controversial and difficult-to-defend it is. Consequently, most of the supposed principles in the brain sciences remotely rich enough to inform us about the arts are deeply questionable.

If we are so ignorant of the brain, why is the modus operandi above sometimes seemingly able to explain art? There is a lot of art out there, and it comes in a wide variety. Consequently, given any supposed principle from neuroscience or psychology, one can nearly always cherry-pick art pieces fitting it. What very few scientific studies do is attempt to quantitatively gauge whether the predicted feature is a general tendency across the arts. The fundamental difficulty on the “arts” side is that we often don’t have a good idea what facets of art are universal tendencies that need to be explained.

These difficulties for the brain and arts make the common modus operandi a poor way to make progress comprehending art and brain. What initially looks like neuroscientific principles being used to explain artistic phenomena is, more commonly, suspect brain principles being used to explain artistic phenomena that may not exist. (A second common approach to linking art and the brain sciences goes in the other direction: to begin with a piece of art, and then to cherry-pick principles from the brain sciences to explain it.)

How, then, should we move forward in our quest to understand the arts? Here I will suggest to you a path, one that addresses the brain and art difficulties above.

The “arts” difficulty can be overcome by identifying regularities actually found in the arts, whether universals, near-universals, or statistical tendencies. One reason large-scale measurements across the arts are not commonly carried out may be that any discipline of the arts tends to be vast and tremendously diverse, and it may seem prima facie unlikely that one will find any interesting regularity. With a strong stomach, however, it is often possible to collect enough data to capture a signal through the noise.

The “arts” difficulty, then, can be addressed by good-old-fashioned data collection, and distillation of empirical regularities. But even so, we are left with another big problem to overcome. “Good-old-fashioned data collection” involves more than simply collecting data. Which data should one collect? And which kinds of regularities should be sought after? Although it is well-known that data helps drive theory, it is not as widely appreciated that theory drives data. There are effectively infinitely many ways of collecting data, and effectively unlimited ways of analyzing any set of data. Without theory as a guide, one is not likely to identify empirical regularities at all, much less ones that are interesting. Good-old-fashioned theory is required in good-old-fashioned data collection. We need predictions about empirical regularities, and then need to gather data in a manner designed to test the prediction.

But this brings us back to our first difficulty, the “brain” one. If we are so ignorant of the principles of the brain, then how can we hope to use it to make predictions about regularities in art?

We are, indeed, woefully ignorant of the brain, but we can make progress in explaining art. Here is the fundamental insight I believe we need: the arts have been culturally selected over time to be a “good fit” for our brain, and our brain has been naturally selected over time to be a good fit to nature …so, perhaps the arts have come to be shaped like nature, exactly the shape our brain came to be highly efficient at processing. For example, perhaps music has been culturally selected to be structured like some natural class of stimuli, a class of stimuli our auditory system evolved via natural selection to process. (See Figure 1.)

[Figure 1: Natural selection and cultural selection in shaping the brain.]

If the arts are as I describe just above – selected to harness our brains by mimicking nature – then we can pursue the origins of art without having to crack open the brain. We can, instead, focus our attention on the regularities found in nature, the regularities which our brains evolved to competently process. I’ll suggest in a moment that we can do exactly this, and give examples where I have been successful at doing so. But first let’s deal with a potential problem…

Don’t brains have quirks? And if so, couldn’t the arts tap into our quirks, and then no analysis of nature would help explain the arts? What do I mean by a quirk? Brains possess mechanisms selected to work well when the inputs to the mechanisms are natural. What happens when the inputs are not natural? That is, what happens when the inputs are of a kind the mechanism was not selected to accommodate? The answer is, “Who knows?!” The mechanism never was selected to accommodate non-natural inputs, and so the mechanism may carry out some arbitrary, inane computation.

To grasp what the mechanism does on these non-natural inputs, we may have no choice but to crack open the hardware and figure out how it actually works. If the arts tended to be culturally selected to tap into the brain’s quirks, then nature wouldn’t help us, and we’d be bound to the brain’s enigmatic details in our grasp of the arts.

There is, however, a good reason to suspect that cultural selection won’t try to harness the brain’s quirks, and the reason is this: quirks are stupid. When your brain mechanisms are running as nature “intended,” they are exceedingly sophisticated machines. When they are run on inputs not in their design specs, however, the behavior of the brain’s mechanisms (now quirks) is typically not intelligent at all. For example, the plastic fork in front of me is well-designed for muffin eating, and although I can comb my hair with it, it is a terribly designed comb. The quirks will usually be embarrassing in their lack of sophistication for any task … because they weren’t designed for any task. And that’s fundamentally why we expect the arts to have been culturally selected to tap into our functional brain mechanisms, running roughly as nature “intended”.

If we can set aside the quirks, then we can side-step the brain in our attempt to grasp the origins of the arts. If I am correct about this, we can remove the most complicated object in the universe from the art equation!

With the brain put on the shelf, the goal is, instead, to analyze nature, and use it to explain the structure of the arts. Is this really possible? And isn’t nature just as complicated as the brain, or, at any rate, sufficiently complicated that we’re headed for despair?

No. Nature is filled with simple regularities, many of them having physics or mathematical foundations. And although it may not be trivial to discover them, our hopes should be far greater than our hopes for unraveling the brain’s mechanisms. Our presumption, then, is that our brains evolved to “know” these regularities of nature, and if we, as scientists, can unravel the regularities, we have thereby unraveled the brain’s competencies. What regularities from nature am I referring to? For the remainder of this piece, I’ll give you three brief examples from my research. Only one is explicitly about the arts, but all three concern the cultural evolution of human artifacts, and how they harness our brains via mimicking nature. (See Figure 2.)

[Figure 2: Shaping culture to look like nature via cultural selection.]

The first concerns the origins of writing, and why letters are shaped as they are. Our visual systems evolved for more than a hundred million years to be highly competent at visually processing natural scenes. One of the most central features of these natural scenes was simply this: they are filled with opaque objects strewn about. And that is enough to lead to visual regularities in nature. For example, there are three junction types having two contours – L, T and X. Ls happen at many object corners, Ts when one edge goes behind an object, and these two are accordingly common in natural scenes. X, however, is rare in natural scenes.

Matching nature, letter shapes with L and T topologies are also common across languages, but X topologies are rare. More generally, the shapes found more commonly in natural scenes are those found more commonly in writing systems. [See this SB piece for more: http://www.scientificblogging.com/mark_changizi/topography_language ]

The second concerns the origins of speech, and why speech sounds as it does. Our auditory systems evolved for tens of millions of years to be highly efficient at processing natural sounds.

Although nature consists of lots of sounds, one of the most fundamental categories of sound is this: solid-object events. Events among solid objects, it turns out, have rich regularities that one can work out. For starters, there are primarily three kinds of sound among solid objects: hits, slides and rings, the latter occurring as periodic vibrations of objects that have been involved in a physical interaction (namely a hit or a slide). Just as hits, slides and rings are the fundamental atoms of solid-object physical events, speech is built out of hits, slides and rings – called plosives, fricatives and sonorants. For another starter example, just as solid-object events consist of a physical interaction (hit or slide) followed by the resultant ring, the most fundamental simple structure across language is the syllable, most commonly of the CV, or consonant-sonorant form. More generally, and as I describe in my upcoming book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (2011), spoken languages share a wide variety of solid-object event signatures.
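The event-atoms-to-phoneme-classes analogy can be captured in a toy mapping. What follows is my own illustrative sketch of the analogy as stated above, not the book's formal model:

```python
# Toy illustration (my own, not from the book): solid-object event atoms
# map onto phoneme classes, and a physical interaction followed by its
# resultant ring parallels the consonant-sonorant (CV) syllable.
EVENT_TO_PHONEME_CLASS = {
    "hit": "plosive",      # e.g., /b/, /t/, /k/
    "slide": "fricative",  # e.g., /s/, /f/, /z/
    "ring": "sonorant",    # e.g., vowels, /m/, /l/
}

def event_to_syllable_shape(event):
    """An interaction (hit or slide) plus the resulting ring ~ a CV syllable."""
    interaction, ring = event
    return (EVENT_TO_PHONEME_CLASS[interaction], EVENT_TO_PHONEME_CLASS[ring])

# A hit followed by a ring is the analog of a plosive-plus-vowel syllable
# like "ba"; a slide followed by a ring, of a fricative-plus-vowel "sa".
print(event_to_syllable_shape(("hit", "ring")))
print(event_to_syllable_shape(("slide", "ring")))
```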

Written and spoken language look and sound like fundamental aspects of nature: opaque objects strewn about and solid objects interacting with one another, respectively. Writing thereby harnesses our visual object-recognition mechanisms, and speech harnesses our event-recognition mechanisms. Neither opaque objects nor solid objects are especially evocative sources in nature, and that’s why the look of most writing and the sound of most speech are not evocative. [See this SciAm piece for more: http://www.scientificamerican.com/article.cfm?id=why-does-music-make-us-fe ]

Music – the third cultural production I have addressed with a nature-harnessing approach – is astoundingly evocative. What kind of story could I give here? A nature-harnessing theory would have to posit a class of natural auditory stimuli that music has culturally evolved to mimic, but haven’t I already dealt with nature’s sounds in my story for speech? In addition to general event recognition systems, we probably possess auditory mechanisms specifically designed for the recognition of human behavior. Human gait, I have argued, has signature patterns found in the regularities of rhythm. Doppler shifts of movers have regularities that one can work out, and these regularities are found in music’s melodic contours. And loudness modulations due to proximity predict how loudness is used in music.

These results are described in my upcoming book, Harnessed. For example, just as faster movers have a greater range of pitches from their directed-toward-you high pitch to their directed-away-from-you low pitch, faster-tempo music tends to use a wider range of pitches for its melody. [See this SB piece for more: http://www.scientificblogging.com/mark_changizi/music_sounds_moving_people ]
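The faster-movers-wider-pitch-range point follows from the textbook Doppler formula: a source emitting at frequency f0 is heard at f0·c/(c − v) when headed straight toward you and f0·c/(c + v) when headed straight away, so the span between those extremes grows with speed. A minimal sketch (the speeds are my own illustrative choices, not figures from the book):

```python
import math

C = 343.0  # speed of sound in air at room temperature, m/s

def doppler_pitch_range_semitones(v):
    """Semitone span between a mover's toward-you and away-from-you pitch.

    f_toward = f0 * C / (C - v);  f_away = f0 * C / (C + v).
    The emitted frequency f0 cancels in the ratio, so only speed matters.
    """
    ratio = (C + v) / (C - v)
    return 12 * math.log2(ratio)

# Illustrative mover speeds: a stroll, a jog, a sprint.
for v in (1.5, 4.0, 8.0):
    print(f"{v:4.1f} m/s -> {doppler_pitch_range_semitones(v):.2f} semitones")
```

The spans are fractions of a semitone, but the monotonic relationship is the point: double the mover's speed and the toward/away pitch span roughly doubles.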

[Figure: Structure of nature-harnessing arguments for speech, writing, and music.]

Many other aspects of the arts are potentially treatable in a similar fashion. For example, color vision, I have argued, is optimized for detecting subtle spectral shifts in other people’s skin, indicating modulations in their emotion, mood or state. That is, color vision is a sense designed for the emotions of other people, and it is possible to understand the meanings of colors on this basis, e.g., red is strong because oxygenated hemoglobin is required for skin to display it. The visual arts are expected to have harnessed our brain’s color mechanisms via using colors as found in nature, namely principally as found on skin. Again, the strategy is to understand art without having to unravel the brain’s mechanisms.

One of the morals I want to convey is that you don’t have to be a neuroscientist to take a brain-based approach to art. The brain’s competencies can be ferreted out without going inside, by carving nature at its joints, just the joints the brain evolved to carve at. One can then search for signs of nature in the structure of the arts. My hope is that via the progress I have made for writing, speech and music, others will be motivated to take up the strategy for grappling with all facets of the arts, and cultural artifacts more generally.

This first appeared on March 4, 2010, as a feature at ScientificBlogging.com.


Read Full Post »

This first appeared on January 10, 2010, as a feature at ScientificBlogging.com.

Joggers love their headphones. If you ask them why, they’ll tell you it keeps them motivated. The right song can transform what is by all rights an arduous half hour of ascetic masochism into an exhilarating whirlwind (or, in my case, into what feels like only 25 minutes of ascetic masochism).

Music-driven joggers may be experiencing a pleasurable diversion, but to the joggers and bikers in their vicinity, they’re Tasmanian Devils.

In choosing to jog to the beat of someone else’s drum rather than their own, headphoned joggers have blinded themselves to the sounds of the other movers around them. Headphones don’t prevent joggers from deftly navigating the trees, stumps, curbs, and parked cars of the world because these things can be seen as one approaches. But when one moves in a world with other movers, things not currently in front of you can quickly come to be in front of you. This is where the headphoned jogger stumbles … and crashes into the crossing jogger, passing biker, or first-time tricycler.

These music-filled movers may be a menace to our streets, but they can serve to educate us all about one of our underappreciated powers: using sound alone, we know where people are around us, and the nature of their movement. I’m sitting in a coffee shop as I write this, and when I close my eyes I sense the movement all around me: a clop of boots just passed to my right; a jingling-key person just walked in front of me from my right to my left, and back; and a pitter patter of a child just meandered way out in front of me. I sense where they are, their direction of motion, and their speed. I also sense their gait, such as whether they are walking or running. And I can often tell more than this, such as a brisk versus shuffling walk, an angry stomp versus a happy prance, or even a complex behavior, like turning and stopping to drop a dirty tray in a bin, slowing to open a door, or reversing direction to get a forgotten coffee. My auditory system carries out these mover-detection computations even when I’m not consciously attending to them. That’s why I’m difficult to sneak up on (although they keep trying!), and that’s why I only rarely find myself saying, “How long has that cheerleading squad been doing jumping jacks behind me?!” That almost never happens to me because my auditory system is keeping track of where people are and roughly what they’re doing, even when I’m otherwise occupied.

We can now see why joggers with their ears unencumbered by headphones almost never crash into feral dogs or runaway wheelchaired grandpas: they may not see the dog or grandpa, but they hear their movement through space, and can dynamically modulate their running to avoid both, and be merrily on their way. Without headphones, joggers are highly sensitive to the sounds of cars, and can track their movement: that car is coming around the bend; the one over there is reversing directly toward me; the one above me is falling; and so on. Headphoned joggers, on the other hand, have turned off their movement-detection system, and should be passed with caution! And although they are a hazard to pedestrians and cyclists, the people they put at greatest risk are themselves. This is because where there are joggers there are often cars nearby, and in collisions between a jogger and an automobile, automobiles typically only need a power-wash to the grille.

How does your auditory system serve as a movement tracking system? In addition to sensing whether a mover is to your left or right, in front or behind, and above or below – something that depends on the shape, position and number of ears you have – you possess specialized auditory software that interprets the sounds of movers and generates a good guess as to the mover’s movement through space. Your software has evolved to give you four kinds of information about a mover: (i) his distance from you, (ii) his directedness toward you, (iii) his speed, and (iv) his behavior or gait. How, then, does your auditory system infer these four kinds of information?

Evidence suggests that (i) distance is gleaned from loudness, (ii) directedness toward you can be cued by pitch (due to subtle but detectable Doppler shifts), (iii) speed is inferred by the number of footsteps per second, and (iv) behavior and gait are read from the pattern of footsteps. Four fundamental parameters of human movement, and four kinds of auditory cue: (i) loudness, (ii) sound frequency, (iii) step rate, and (iv) gait pattern.
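These four cue-to-parameter mappings can be sketched as back-of-the-envelope inferences. The constants below (a reference footstep loudness, a stride length, the gait threshold) are illustrative assumptions of mine, not measurements from the article:

```python
import math

REF_DB_AT_1M = 70.0  # assumed loudness of a footstep heard from 1 m away

def distance_from_loudness(db):
    """(i) Distance via the inverse-square law: level falls ~6 dB per doubling."""
    return 10 ** ((REF_DB_AT_1M - db) / 20)

def approach_speed_from_pitch(f_obs, f_emitted, c=343.0):
    """(ii) Radial speed toward you (m/s) implied by an upward Doppler shift,
    inverting f_obs = f_emitted * c / (c - v)."""
    return c * (1 - f_emitted / f_obs)

def speed_from_step_rate(steps_per_sec, stride_m=0.8):
    """(iii) Ground speed from footsteps per second, given an assumed stride."""
    return steps_per_sec * stride_m

def gait_from_pattern(intervals_sec):
    """(iv) A crude gait read from the step-interval pattern: even intervals
    suggest a steady walk or run; alternating short-long intervals suggest
    a skip or gallop. Threshold is an arbitrary illustrative choice."""
    mean = sum(intervals_sec) / len(intervals_sec)
    std = math.sqrt(sum((x - mean) ** 2 for x in intervals_sec) / len(intervals_sec))
    return "uneven (skip/gallop)" if std / mean > 0.25 else "even (walk/run)"
```

For instance, a footstep heard at 64 dB would be placed about 2 m away, and three steps per second with the assumed stride works out to about 2.4 m/s.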

Your auditory system has evolved to track these cues because of the supreme value in knowing where and what everyone is doing nearby.

This is where things get interesting… Even though joggers without headphones are not listening to music, their auditory systems are listening to fundamentally music-like constituents. Consider the four auditory movement cues mentioned just above (and shown on the right of Figure 2). Loudness? That’s just pianissimo versus piano versus forte and so on. Sound frequency? That’s roughly pitch. Step rate? That’s tempo. And the gait pattern? That’s akin to rhythm and beat. The four fundamental auditory cues for movement are, then, awfully similar to (i) loudness, (ii) pitch, (iii) tempo, and (iv) rhythm.

These are the most fundamental ingredients of music, and yet, there they are in the sounds of human movers. The most informative sounds of human movers are the fundamental building blocks of music!

The importance of loudness, pitch, tempo and rhythm to both music and movement is, I believe, more than a coincidence. The similarity runs deep – something speculated on ever since the Greeks. Research in my lab has been providing evidence that music is built not just with the building blocks of movement, but is actually organized like movement, thereby harnessing our movement-recognition auditory mechanisms. The story this leads to for music is this: Music has been culturally selected to sound like people moving, just the kinds of sounds your auditory system evolved to be great at processing. …and just the kinds of sounds that can possess emotional content that makes music evocative and worth listening to.

Music is evocative because it is made with people, something I also wrote about here [ http://bit.ly/rcKVh ], and something I will discuss further in the future.

Headphoned joggers, then, aren’t merely missing out on the real movement around them – they pipe into their ears a fictional movement, making them even more hazardous than a jogger wearing earplugs.



Read Full Post »

By Mark Changizi   

As a young man I enjoyed listening to a particular series of French instructional programs. I didn’t understand a word, but was nevertheless enthralled. Was it because the sounds of human speech are thrilling? Not really. Speech sounds alone, stripped of their meaning, don’t inspire. We don’t wake up to alarm clocks blaring German speech. We don’t drive to work listening to native spoken Eskimo, and then switch it to the Bushmen Click station during the commercials. Speech sounds don’t give us the chills, and they don’t make us cry – not even French.


But music does emanate from our alarm clocks in the morning, and fill our cars, and give us chills, and make us cry. According to a recent paper by Nidhya Logeswaran and Joydeep Bhattacharya from the University of London, music even affects how we see visual images. In the experiment, 30 subjects were presented with a series of happy or sad musical excerpts. After listening to the snippets, the subjects were shown a photograph of a face. Some people were shown a happy face – the person was smiling – while others were exposed to a sad or neutral facial expression. The participants were then asked to rate the emotional content of the face on a 7-point scale, where 1 meant extremely sad and 7 extremely happy.

The researchers found that music powerfully influenced the emotional ratings of the faces. Happy music made happy faces seem even happier while sad music exaggerated the melancholy of a frown. A similar effect was also observed with neutral faces. The simple moral is that the emotions of music are “cross-modal,” and can easily spread from one sensory system to another. Now I never sit down to my wife’s meals without first putting on a jolly Sousa march.

Although it probably seems obvious that music can evoke emotions, it is to this day not clear why. Why doesn’t music feel like listening to speech sounds, or animal calls, or garbage disposals? Why is music nice to listen to? Why does music get blessed with a multi-billion dollar industry, whereas there is no market for “easy listening” speech sounds?

In an effort to answer, let’s first ask why I was listening to French instructional programs in the first place. The truth is, I wasn’t just listening. I was watching them on public television. What kept my attention was not the meaningless-to-me speech sounds (I was a slow learner), but the young French actress. Her hair, her smile, her mannerisms, her pout… I digress. The show was a pleasure to watch because of the humans it showed, especially the exhibited expressions and behaviors.

The lion’s share of emotionally evocative stimuli in the lives of our ancestors would have been from the faces and bodies of other people, and if one finds human artifacts that are highly evocative, it is a good hunch that they look or sound human in some way.

…continue reading at Scientific American

Mark Changizi is Professor of Cognitive Science at RPI, the author of The Vision Revolution (Benbella, 2009) and The Brain from 25,000 Feet (Kluwer, 2003).

Read Full Post »
