
Posts Tagged ‘Harnessed’

[Image: cover of the Korean edition of Harnessed.]

My most recent book, Harnessed, has now appeared in Korean translation, thanks to tireless translator Seung Young Noh. For more info about the book, here’s a start: a review by a Nobel laureate.

~~~

Mark Changizi is Director of Human Cognition at 2AI, a managing director of O2Amp, and the author of HARNESSED: How Language and Music Mimicked Nature and Transformed Ape to Man and THE VISION REVOLUTION. He is finishing up his new book, HUMAN 3.0, a novel about our human future, and working on his next non-fiction book, FORCE OF EMOTION.

Read Full Post »

[Image: New Scientist’s top ten science books of 2011; Harnessed is on the right.]

I’m excited that my new book, Harnessed, is among New Scientist’s top ten science books of 2011, standing alongside other authors I admire.

In the book I describe (and present a large battery of new evidence for) my radical new theory for how humans came to have language and music. They’re not instincts (i.e., we didn’t evolve them via natural selection), and they’re not something we merely learn. Instead, speech and music have themselves culturally evolved to fit us (not a new idea) by mimicking fundamental aspects of nature (my idea). Namely speech came to sound like physical events among solid objects, and music came to sound like humans moving and behaving in your midst (that’s why music is evocative). Each of these artifacts thereby came to harness an instinct we apes already possessed, namely auditory object-event recognition and auditory human-movement recognition mechanisms.

The story for how we came to have speech and music is, then, analogous to how we came to have writing, something we know we didn’t evolve. Writing, I’ve argued (in The Vision Revolution), culturally evolved to possess the signature shapes found in nature (and specifically in 3D scenes with opaque objects), and thereby harnessed our visual object-recognition system.

Buy the book here.

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man and The Vision Revolution. He is finishing up his new book, HUMAN, a novel about our human future.

Read Full Post »

I’ve argued there’s no imminent singularity, and I’ve thrown cold water on the idea that the web will become smart or self-aware. But am I just a wet blanket, or do I have a positive vision of our human future?

I have just written up a short “manifesto” of sorts about where we humans are headed, and it appeared in Seed Magazine. It serves not only as a guidepost to our long-term future, but also as a guide to creating better technologies for our brains (part of the aim of 2ai, the research institute I co-direct with colleague Tim Barber).

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books, 2009) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011).

Read Full Post »

I believe that music sounds like people, moving. Yes, the idea may sound a bit crazy, but it’s an old idea, much discussed in the 20th century, and going all the way back to the Greeks. There are lots of things going for the theory, including that it helps us explain…

(1) why our brains are so good at absorbing music (…because we evolved to possess human-movement-detecting auditory mechanisms),

(2) why music emotionally moves us (…because human movement is often expressive of the mover’s mood or state), and

(3) why music gets us moving (…because we’re a social species prone to social contagion).

And as I describe in detail in my upcoming book — Harnessed: How Language and Music Mimicked Nature and Transformed Ape To Man — music has the signature auditory patterns of human movement (something I hint at in this older piece of mine).

Here I’d like to describe a novel way of thinking about what the meaning of music might be. Rather than dwelling on the sound of music, I’d like to focus on the look of music. In particular, what does our brain think music looks like?

It is natural to assume that the visual information streaming into our eyes determines the visual perceptions we end up with, and that the auditory information entering our ears determines the events we hear.

But the brain is more complicated than this. Visual and auditory information interact in the brain, and the brain utilizes both to guess at the single scene it should render a perception of. For example, the research of Ladan Shams, Yukiyasu Kamitani and Shinsuke Shimojo at Caltech has shown that we perceive a single flash as a double flash if it is paired with a double beep. And Robert Sekuler and others from Brandeis University have shown that if a sound occurs at the time when two balls pass through each other on screen, the balls are instead perceived to have collided and reversed direction.

These and other results of this kind demonstrate the interconnectedness of visual and auditory information in our brain. Visual ambiguity can be reduced with auditory information, and vice versa. And, generally, both are brought to bear in the brain’s attempt to infer the best guess about what’s out there.

Your brain does not, then, consist of independent visual and auditory systems, with separate troves of visual and auditory “knowledge” about the world. Instead, vision and audition talk to one another, and there are regions of cortex responsible for making vision and audition fit one another.

These regions know about the sounds of looks and the looks of sounds.

Because of this, when your brain hears something but cannot see it, your brain does not just sit by and refrain from guessing what it might have looked like.

When your auditory system makes sense of something, it will have a tendency to activate visual areas, eliciting imagery of its best guess as to the appearance of the stuff making the sound.

For example, the sound of your neighbor’s rustling tree may bring to mind an image of its swaying, lanky branches. The whine of your cat heard far away may evoke an image of it stuck up high in that tree. And the pumping of your neighbor’s kid’s BB gun can bring forth an image of the gun being pointed at Foofy way up there.

Your visual system has, then, strong opinions about the proper look of the things it hears.

And, bringing ourselves back to music, we can use the visual system’s strong opinions as a means for gauging music’s meaning.

In particular, we can ask your visual system what it thinks the appropriate visual is for music.

If, for example, the visual system responds to music with images of beating hearts, then it would suggest, to my disbelief, that music mimics the sounds of heartbeats. If, instead, the visual system responds with images of pornography, then it would suggest that music sounds like sex. You get the idea.

But in order to get the visual system to act like an oracle, we need to get it to speak. How are we to know what the visual system thinks music looks like?

One approach is to simply ask which visuals are, in fact, associated with music. For example, when people create imagery of musical notes, what does it look like? One cheap way to look into this is simply to do a Google (or any search engine) image search on the term “musical notes.” You might think such a search would merely return images of simple notes on the page.

However, that is not what one finds. To my surprise, actually, most of the images are like the one in the nearby figure, with notes drawn in such a way that they appear to be moving through space.

Notes in musical notation never actually look anything like this, and real musical notes have no look at all (because they are sounds). And yet we humans seem to be prone to visually depicting notes as moving all about.

[Figure: musical notes drawn as if moving through space. Music tends to be depicted as moving.]
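
If you want to make that informal survey a bit more systematic, it is easy to rough out in code. The sketch below is mine, not something from the book: it pulls one page of image-search results for “musical notes” and tallies how many result titles use motion words. It assumes access to Google’s Custom Search JSON API, and the API key and search-engine ID are placeholders you would have to supply yourself.

    # Rough sketch: how often are image-search results for "musical notes"
    # titled with motion words? Assumes Google's Custom Search JSON API;
    # API_KEY and ENGINE_ID are hypothetical placeholders.
    import requests

    API_KEY = "YOUR_API_KEY"      # placeholder
    ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder

    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": ENGINE_ID,
                "q": "musical notes", "searchType": "image"},
    )
    items = resp.json().get("items", [])

    MOTION_WORDS = {"flying", "floating", "flowing", "swirling", "dancing", "moving"}
    moving = [item["title"] for item in items
              if any(word in item["title"].lower() for word in MOTION_WORDS)]

    print(f"{len(moving)} of {len(items)} result titles mention motion:")
    for title in moving:
        print(" -", title)

Nothing hangs on the particular word list or search engine; the point is only that the motion theme shows up even in the text people attach to these images.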

Could these images of notes in motion be due to a more mundane association?

Music is played by people, and people have to move in order to play their instrument. Could this be the source of the movement-music association? I don’t think so, because the movement suggested in these images of notes doesn’t look like an instrument being played. In fact, it is common to show images of an instrument with the notes beginning their movement through space from the instrument: these notes are on their way somewhere, not an indication of the musician’s key-pressing or back-and-forth movements.

Could it be that the musical notes are depicted as moving through space because sound waves move through space? The difficulty with this hypothesis is that all sound moves through space. All sound would, if this were the case, be visually rendered as moving through space, but that’s not the case. For example, speech is not usually visually rendered as moving through space. Another difficulty is that the musical notes are usually meandering in these images, but sound waves are not meandering — sound waves go straight. A third problem with sound waves underlying the visual metaphor is that we never see sound waves in the first place.

Another possible counter-hypothesis is that the depiction of visual movement in the images of musical notes is because all auditory stimuli are caused by underlying events with movement of some kind. The first difficulty, as was the case for sound waves, is that it is not the case that all sound is visually rendered in motion. The second difficulty is that, while it is true that sounds typically require movement of some kind, it need not be movement of the entire object through space. Moving parts within the object may make the noise, without the object going anywhere. In fact, the three examples I gave earlier — leaves rustling, Foofy whining, and the BB gun pumping — are noises without any bulk movement of the object (the tree, Foofy, and the BB gun, respectively). The musical notes in imagery, on the other hand, really do seem to be moving, in bulk, across space.

Music is like tree-rustling, Foofy, BB guns and human speech in that it is not made via bulk movement through space. And yet music appears to be unique in this tendency to be visually depicted as moving through space.

In addition, not only are musical notes rendered in motion, they tend to be depicted as meandering.

When visually rendered, music looks alive and in motion (often along the ground), just what one might expect if music’s secret is that it sounds like people moving.

A Google Image search on “musical notes” is one means by which one may attempt to discern what the visual system thinks music looks like, but another is to simply ask ourselves what is the most common visual display shown during music. That is, if people were to put videos to music, what would the videos tend to look like?

Lucky for us, people do put videos to music! They’re called music videos, of course. And what do they look like?

The answer is so obvious that it hardly seems worth noting: music videos tend to show people moving about, usually in a time-locked fashion to the music, very often dancing.

As obvious as it is that music videos typically show people moving, we must remember to ask ourselves why music isn’t typically visually associated with something very different. Why aren’t music videos mostly of rivers, avalanches, car races, wind-blown grass, lion hunts, fire, or bouncing balls?

It is because, I am suggesting, our brain thinks that humans moving about is what music should look like…because it thinks that humans moving about is what music sounds like.

Musical notes are rendered as meandering through space. Music videos are built largely from people moving, and in a time-locked manner to the music. That’s beginning to suggest that the visual system is under the impression that music sounds like human movement.

But if that’s really what the visual system thinks, then it should have more opinions than simply that music sounds like movement. It should have opinions about what, more exactly, the movement should look like.

Do our visual systems have opinions this precise? Are we picky about the mover that’s put to music?

You bet we are! That’s choreography. It’s not enough to play a video of the Nutcracker ballet during Beatles music, nor will it suffice to play a video of the Nutcracker to the music of Nutcracker, but with a small time lag between them. The video of human movement has to have all the right moves at the right time to be the right fit for the music.

These strong opinions about what music looks like make perfect sense if music mimics human movement sounds. In real life, when people carry out complex behaviors, their visual movements are tightly choreographed with the sounds – because the sight and sound are due to the same event. When you hear movement, you expect to see that same movement. Music sounds to your brain like human movement, which is why, when your brain hears music, it expects any visuals put to it to be consistent with that movement.
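
As a purely illustrative aside (my own toy example, not anything from the book), the “small time lag” point can be made concrete. Treat the music’s loudness over time and the dancer’s amount of movement over time as two signals sampled at the video frame rate; a lag between them shows up as an off-center peak in their cross-correlation, and zero offset is what tight choreography looks like. A minimal numpy sketch with fake signals:

    # Toy sketch: recover the lag between a fake music "beat" envelope and a
    # fake motion-energy envelope. All signals here are invented for illustration.
    import numpy as np

    rate = 30                                    # samples per second (video frame rate)
    t = np.arange(0, 10, 1 / rate)               # ten seconds of "performance"
    music_env = np.maximum(0.0, np.sin(2 * np.pi * 2 * t))  # fake 2 Hz beat envelope
    lag_frames = 6                               # dancer trails the music by 0.2 s
    motion_env = np.roll(music_env, lag_frames)  # fake motion-energy envelope

    def estimate_lag(a, b):
        """Shift of b relative to a (in samples) that best aligns the two signals."""
        a = a - a.mean()
        b = b - b.mean()
        xcorr = np.correlate(b, a, mode="full")
        return int(np.argmax(xcorr)) - (len(a) - 1)

    lag = estimate_lag(music_env, motion_env)
    print(f"estimated lag: {lag / rate:.2f} seconds")   # prints 0.20

The fake dancer trails the fake beat by 0.2 seconds, and that is exactly the offset the cross-correlation recovers.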

~~~~~~

This was adapted from Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011). It first appeared July 26, 2010, at Psychology Today.

Mark Changizi is Professor of Human Cognition at 2ai, and author of The Vision Revolution.

Read Full Post »

It is my pleasure to announce that my upcoming book, HARNESSED (Benbella, 2011) can now be pre-ordered at Amazon!

It is about how we came to have language and music. …about how we became modern humans. See http://changizi.wordpress.com/book-harnessed/ for more about the book.

~~~

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

Read Full Post »

Daniel Lende from PLoS Blogs’ Neuroanthropology recently interviewed me about the relationship between culture, brain and nature, and the origins of language. See the interview here.

In my view, anthropology — and evolution and culture — are crucial to understanding neuroscience and our origins. …and so their “Neuroanthropology” blog (also by Greg Downey) will be one I follow closely.

Read Full Post »

As seen in classified ads…

Have a talent and enjoyment for inflicting prescribed doses of pain? Your dream job awaits. (Biology undergraduate required.) Contact: 555-8428
~~~~~
You are not supposed to be reading this. You’re an ape who never evolved to read, but you can do so because writing culturally evolved to be shaped just right for your illiterate visual system. As I have argued in my research and recent books, culture’s trick for getting writing into us was to harness our ancient visual system for a new purpose (The Vision Revolution), a trick also used for speech and music (upcoming in Harnessed). (Hint: The trick to harnessing is, in each case, to mimic nature.)

This “harnessing” strategy is just the tip of the iceberg – our modern civilization is, in myriad ways, shaped to fit our fundamentally uncivilized selves. Culture has given us clothes that fit our body shapes, color patterns that fit our innate color senses, lexicons that fit our brains, religions that fit our aspirations, and chairs that fit our butts.

But there is one glaring gap in how we have been harnessed for modernity, a gap that, if addressed, would lead to a revolution in safety and well-being for humankind.

What’s missing is pain.

Pain is crucial, of course, because it keeps us safe, and prevents us from engaging in acts that injure or slice off parts of ourselves. Although wishing for a world without pain sounds initially alluring, one quickly realizes that such a world would be hell – it would be a world of the walking bruised and hideously injured (unless you’re into that). Those who lack pain don’t last long. And even if they avoid catching on fire or bleeding to death, they often succumb to death by a thousand pricks (e.g., they don’t shift their body weight as the rest of us do when they sit too long in one position, and this leads over time to circulatory damage).

Pain is designed to be elicited before injury actually occurs, with the hope that it prevents injury altogether. (E.g., see Why Does Light Make Headaches Worse?) Pain is evolutionarily designed to cause us to say, “Ouch!”, rather than, “Darn, I needed that appendage!”

More importantly for our purposes here, pain is rigged to be elicited in scenarios that would have been dangerous for our ancestors out in nature. For a great example of what happens to animals that encounter injurious situations with no pain mechanism to deter them, consider what happens when natural gas accumulates in low spots. One animal gets there and dies. Another animal sees an easy meal, and also dies. Soon there are many dozens of dead animals there, lured to their deaths, with life-snuffing injuries sneaking up on them without the benefit of warning pain.

And there’s your problem! We no longer live in the nature that shaped our bodies and brains, and the dangerous scenarios we now face aren’t the same as those our ancestors faced. Electricity, band saws, nail guns, stove tops, toasters perched next to bathtubs, and countless other modern dangers exist today, dangers that we’re not designed to have safety-ensuring pain to protect us from (until it’s too late).

What we need are technologies that inflict “smart pain,” pain not only designed to go off at signs of modern dangers, but designed to be painful in the right way, on the right body part, so as to optimally alert us to the acute danger.

Just to throw out a few examples…

  • Your car rigged to shock you on your left or right side if you drive within several inches of an obstacle on that side (a toy sketch of this one follows the list).
  • Your computer set to shine a painfully bright red light if you are about to click on a suspicious link.
  • A wearable device with a video sensor that detects the likelihood that the person you’re picking up at a bar has an STD, and then causes severe itching until you flee the bar.
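
To make the first bullet slightly less hand-wavy, here is a toy sketch; the sensors, the threshold, and the very idea of a car that stings you are all invented for illustration, not any real automotive API:

    # Hypothetical "smart pain" rule for the car example: given left and right
    # clearance readings (inches), decide which side, if any, should sting.
    from typing import Optional

    THRESHOLD_INCHES = 6.0   # arbitrary "too close" clearance

    def pain_side(left_gap_in: float, right_gap_in: float) -> Optional[str]:
        """Return 'left' or 'right' if that side is dangerously close, else None."""
        if min(left_gap_in, right_gap_in) >= THRESHOLD_INCHES:
            return None                                   # safe clearance on both sides
        return "left" if left_gap_in <= right_gap_in else "right"

    print(pain_side(left_gap_in=3.2, right_gap_in=40.0))   # about to scrape: "left"
    print(pain_side(left_gap_in=24.0, right_gap_in=24.0))  # plenty of room: None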

You’re beginning to get the idea, and I hope you can see that the ideas are endless. What I would like to see are your own suggestions for the future of pain engineering, and a world where all sadists are employed.

This first appeared on May 6, 2010, as a feature at bodyinmind.au.

=============

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed (Benbella Books).

Read Full Post »

