
Posts Tagged ‘language’

The Library Journal has a short review by Cynthia Knight of my book, Harnessed.

Many scientists believe that the human brain’s capacity for language is innate, that the brain is actually “hard-wired” for this higher-level functionality. But theoretical neurobiologist Changizi (director of human cognition, 2AI Labs; The Vision Revolution) brilliantly challenges this view, claiming that language (and music) are neither innate nor instinctual to the brain but evolved culturally to take advantage of what the most ancient aspect of our brain does best: process the sounds of nature. By “sounds of nature,” Changizi does not mean birds chirping or rain falling. His provocative theory is based on the identification of striking similarities between the phoneme level of language and the elemental auditory properties of solid objects and, in the case of music, similarities between the sounds of human movement and the basic elements of music.

Verdict: Although the book is written in a witty, informal style, the science underpinning this theoretical argument (acoustics, phonology, physics) could be somewhat intimidating to the nonspecialist. Still, it will certainly intrigue evolutionary biologists, linguists, and cultural anthropologists and is strongly recommended for libraries that have Changizi’s previous book.

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of
Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man and The Vision Revolution.


A word is vague if it has borderline cases. Yul Brynner (the lead in “The King and I”) is definitely bald, I am (at the time of this writing) definitely not, and there are many people who seem to be neither. These people are in the “borderline region” of ‘bald’, and this phenomenon is central to vagueness.

Nearly every word in natural language is vague, from ‘person’ and ‘coercion’ in ethics, ‘object’ and ‘red’ in physical science, and ‘dog’ and ‘male’ in biology, to ‘chair’ and ‘plaid’ in interior decorating.

Vagueness is the rule, not the exception. Pick any natural language word you like, and you will almost surely be able to concoct a case — perhaps an imaginary case — where it is unclear to you whether or not the word applies.

Take ‘book’, for example. “The Bible” is definitely a book, and a light bulb is definitely not. Is a pamphlet a book? If you dipped a book in acid and burned off all the ink, would it still be a book? If I write a book in tiny script on the back of a turtle, is the turtle’s back a book?

We have no idea how to answer such questions. The fact that such questions appear to have no determinate answer is roughly what we mean when we say that ‘book’ is vague.

And vagueness is intimately related to the ancient sorites paradox, where from seemingly true premises that (i) a thousand grains of sand makes a heap, and (ii) if n+1 grains of sand make a heap, then n make a heap, one can derive the false conclusion that one grain of sand makes a heap.
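The sorites derivation can be made mechanical. Here is a minimal sketch (the function name and starting count are invented for illustration) that simply applies the two premises over and over:

```python
def sorites_conclusion(start=1000):
    """Mechanically apply the two sorites premises:
    (i)  a thousand grains make a heap;
    (ii) if n+1 grains make a heap, then n grains make a heap.
    Applying (ii) over and over walks the truth of 'heap'
    all the way down to a single grain."""
    heap = {start: True}           # premise (i)
    for n in range(start - 1, 0, -1):
        heap[n] = heap[n + 1]      # premise (ii)
    return heap[1]                 # the absurd conclusion

print(sorites_conclusion())  # True: one grain of sand 'makes a heap'
```

The code makes vivid that the two premises jointly entail the absurd conclusion; the paradox is that each premise nonetheless seems true on its own.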

Is vagueness a problem with our language, or our brains?

Or, could it be that vagueness is in some way necessary…

When you or I judge whether or not a word applies to an object, we are (in some abstract sense) running a program in the head.

The job of each of these programs (one for each word) is to output YES when input with an object to which the word applies, and to output NO when input with an object to which the word does not apply.

That sounds simple enough! But why, then, do we have vagueness? With programs like this in our head, we’d always get a clear YES or NO answer.

But it isn’t quite so simple.

Some of these “meaning” programs, when asked about some object, will refuse to respond. Instead of responding with a YES or NO, the program will just keep running on and on, until eventually you must give up on it and conclude that the object does not seem to clearly fit, nor clearly not fit.

Our programs in the head for telling us what words mean have “holes” in them. Our concepts have holes. And when a program for some word fails to respond with an answer — when the hole is “hit” — we see that the concept is actually vague.

Why, though, is it so difficult to have programs in the head that answer YES or NO when input with any object? Why should our programs have these holes?

Holes are inevitable for us because they are inevitable for any computing device, and our brains are computing devices.

The problem is called the Always-Halting Problem. Some programs have inputs that send them into infinite loops. One doesn’t want one’s program to do that; one wants it to halt, and to do so on every possible input. It would be nice to have a program that takes in other programs and checks whether they have an infinite loop inside them. But the Always-Halting Problem states that there can be no such infinite-loop-checking program. Checking that a program always halts is not generally possible.
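To make the picture concrete, here is a toy sketch of a “meaning program” with a deliberately planted hole, together with a budgeted runner of the kind described above. The classifier, its thresholds, and the step budget are all invented for illustration; real concepts in the head are of course not this simple:

```python
def is_heap(grains):
    """Toy 'meaning program' for the word 'heap', written as a
    generator so a caller can count its computation steps. Clear
    cases answer immediately; borderline cases deliberate forever
    (the 'hole')."""
    if grains >= 100:
        yield True            # clearly a heap
    elif grains <= 10:
        yield False           # clearly not a heap
    else:
        while True:           # the hole: no answer ever arrives
            yield None

def judge(program, x, budget=1000):
    """Run a meaning program under a step budget. If no YES/NO
    arrives in time, report 'borderline' -- though we can never rule
    out that a bigger budget would have answered."""
    for step, output in enumerate(program(x)):
        if output is not None:
            return "YES" if output else "NO"
        if step >= budget:
            break
    return "borderline"

print(judge(is_heap, 1000))   # YES
print(judge(is_heap, 3))      # NO
print(judge(is_heap, 50))     # borderline
```

Note that “borderline” is only ever a report of a timeout: nothing in the runner can certify that a larger budget would not eventually have yielded a YES or a NO.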

That’s why programs have holes in them — because it’s computationally impossible to get rid of them all.

And that’s why our own programs in the head have holes in them. That’s why our concepts have holes, or borderline cases where the concept neither clearly applies nor clearly fails to apply.
Furthermore, notice a second feature of vagueness: Not only is there no clear boundary between where the concept applies and does not, but there are no clear boundaries to the boundary region.

We do not find ourselves saying, “84 grains of sand is a heap, 83 grains is a heap, but 82 grains is neither heap nor non-heap.”

[Image: sorites sandpile problem]

This facet of vagueness — which is called “higher-order vagueness” — is not only something we have to deal with, but is also something which any computational device must contend with.

If 82 grains is in the borderline region of ‘heap’, then it is not because the program-in-the-head said “Oh, that’s a borderline case.” Rather, it is a borderline case because the program failed to halt at all.

And when something fails to halt, you can’t be sure it won’t halt. Perhaps it will eventually halt, later.

The problem here is called the Halting Problem, a simpler problem than the Always-Halting Problem mentioned earlier. The issue now is simply whether a given program will halt on a given input (whereas the “Always” version concerned whether a given program will halt on every input).

And this problem also is not generally solvable by any computational device. When you get to 82 grains from 83, your program in the head doesn’t respond at all, but you don’t know it won’t ever respond.

Your ability to see the boundary of the borderline region is itself fuzzy.

Our concepts not only have holes in them, but unseeable holes. …in the sense that exactly where the borders of the holes are is unclear.

And these aren’t quirks of our brains, but necessary consequences of any computational creature — man or machine — having concepts.

~~~

This originally appeared August 19, 2010, at Science 2.0. (See the comments there for some good discussion.)

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books). He is working on his fourth book at the moment, tentatively titled Making Faces, about emotions and facial expressions.

 


A generation ago it was only a brave eclectic minority of psychologists and neuroscientists who dared to address the arts. Things have changed considerably since then. “Art and brain” is now a legitimate and respected target of study, and is approached from a variety of viewpoints, from reductionistic neurophysiology to evolutionary approaches.

Things have changed so quickly that late-20th-century conversations about how to create stronger art–science collaborations and connections are dated only a decade later – everyone’s already doing it! And the new generation of students being trained is at home in both the arts and sciences in a way that was rare before.

Although we are all now more culturally comfortable bathing in conversations about art and brain, are we making progress? Has looking into the brain helped us make sense of the arts? Here I will briefly explain why I believe we have made little progress. And then I will propose an alternative route to understanding art and its origins.

Perhaps the most common modus operandi in the cognitive and brain sciences approach to art is (i) to point to some known principle of brain science, and then (ii) to provide examples of art showing conformance with that principle. As fun as it may be to read explanations of art of this kind, the approach suffers from two fundamental difficulties – one on the brain side, one on the arts side.

Let’s start with the “brain” difficulty, which is simply this: we don’t understand the brain. Although the field is jam-packed with fantastically clever experiments giving us fascinating and often valid data, there is usually very little agreement (or ought to be little agreement) about how to distill the data into broad principles. And the broader and higher-level the supposed principle, the more controversial and difficult-to-defend it is. Consequently, most of the supposed principles in the brain sciences remotely rich enough to inform us about the arts are deeply questionable.

If we are so ignorant of the brain, why does the modus operandi above sometimes seem able to explain art? There is a lot of art out there, and it comes in a wide variety. Consequently, given any supposed principle from neuroscience or psychology, one can nearly always cherry-pick art pieces fitting it. What very few scientific studies do is attempt to quantitatively gauge whether the predicted feature is a general tendency across the arts. The fundamental difficulty on the “arts” side is that we often don’t have a good idea which facets of art are universal tendencies that need to be explained.

These difficulties on the brain and arts sides make the common modus operandi a poor way to make progress in comprehending art and brain. What initially looks like neuroscientific principles being used to explain artistic phenomena is, more commonly, suspect brain principles being used to explain artistic phenomena that may not exist. (A second common approach to linking art and the brain sciences goes in the other direction: begin with a piece of art, and then cherry-pick principles from the brain sciences to explain it.)

How, then, should we move forward in our quest to understand the arts? Here I will suggest to you a path, one that addresses the brain and art difficulties above.

The “arts” difficulty can be overcome by identifying regularities actually found in the arts, whether universals, near-universals, or statistical tendencies. One reason large-scale measurements across the arts are not commonly carried out may be that any discipline of the arts tends to be vast and tremendously diverse, and it may seem prima facie unlikely that one will find any interesting regularity. With a strong stomach, however, it is often possible to collect enough data to capture a signal through the noise.

The “arts” difficulty, then, can be addressed by good-old-fashioned data collection, and distillation of empirical regularities. But even so, we are left with another big problem to overcome. “Good-old-fashioned data collection” involves more than simply collecting data. Which data should one collect? And which kinds of regularities should be sought after? Although it is well known that data helps drive theory, it is not as widely appreciated that theory drives data. There are effectively infinitely many ways of collecting data, and effectively unlimited ways of analyzing any set of data. Without theory as a guide, one is not likely to identify empirical regularities at all, much less ones that are interesting. Good-old-fashioned theory is required in good-old-fashioned data collection. We need predictions about empirical regularities, and then need to gather data in a manner designed to test the prediction.

But this brings us back to our first difficulty, the “brain” one. If we are so ignorant of the principles of the brain, then how can we hope to use them to make predictions about regularities in art?

We are, indeed, woefully ignorant of the brain, but we can make progress in explaining art. Here is the fundamental insight I believe we need: the arts have been culturally selected over time to be a “good fit” for our brain, and our brain has been naturally selected over time to be a good fit to nature …so, perhaps the arts have come to be shaped like nature, exactly the shape our brain came to be highly efficient at processing. For example, perhaps music has been culturally selected to be structured like some natural class of stimuli, a class of stimuli our auditory system evolved via natural selection to process. (See Figure 1.)

[Figure 1: natural selection and cultural selection in shaping the brain]

If the arts are as I describe just above – selected to harness our brains by mimicking nature – then we can pursue the origins of art without having to crack open the brain. We can, instead, focus our attention on the regularities found in nature, the regularities which our brains evolved to competently process. I’ll suggest in a moment that we can do exactly this, and give examples where I have been successful at doing so. But first let’s deal with a potential problem…

Don’t brains have quirks? And if so, couldn’t the arts tap into our quirks, and then no analysis of nature would help explain the arts? What do I mean by a quirk? Brains possess mechanisms selected to work well when the inputs to the mechanisms are natural. What happens when the inputs are not natural? That is, what happens when the inputs are of a kind the mechanism was not selected to accommodate? The answer is, “Who knows?!” The mechanism never was selected to accommodate non-natural inputs, and so the mechanism may carry out some arbitrary, inane computation.

To grasp what the mechanism does on these non-natural inputs, we may have no choice but to crack open the hardware and figure out how it actually works. If the arts tended to be culturally selected to tap into the brain’s quirks, then nature wouldn’t help us, and we’d be bound to the brain’s enigmatic details in our grasp of the arts.

There is, however, a good reason to suspect that cultural selection won’t try to harness the brain’s quirks, and the reason is this: quirks are stupid. When your brain mechanisms are running as nature “intended,” they are exceedingly sophisticated machines. When they are run on inputs outside their design specs, however, the behavior of the brain’s mechanisms (now quirks) is typically not intelligent at all. For example, the plastic fork in front of me is well designed for muffin eating, and although I can comb my hair with it, it is a terribly designed comb. The quirks will usually be embarrassing in their lack of sophistication for any task. …because they weren’t designed for any task. And that’s fundamentally why we expect the arts to have been culturally selected to tap into our functional brain mechanisms, running roughly as nature “intended.”

If we can set aside the quirks, then we can side-step the brain in our attempt to grasp the origins of the arts. If I am correct about this, we can remove the most complicated object in the universe from the art equation!

With the brain put on the shelf, the goal is, instead, to analyze nature, and use it to explain the structure of the arts. Is this really possible? And isn’t nature just as complicated as the brain, or, at any rate, sufficiently complicated that we’re headed for despair?

No. Nature is filled with simple regularities, many of them having physical or mathematical foundations. And although it may not be trivial to discover them, our hopes should be far greater than our hopes for unraveling the brain’s mechanisms. Our presumption, then, is that our brains evolved to “know” these regularities of nature, and if we, as scientists, can unravel the regularities, we have thereby unraveled the brain’s competencies. What regularities from nature am I referring to? For the remainder of this piece, I’ll give you three brief examples from my research. Only one is explicitly about the arts, but all three concern the cultural evolution of human artifacts, and how they harness our brains via mimicking nature. (See Figure 2.)

[Figure 2: shaping culture to look like nature in cultural selection]

The first concerns the origins of writing, and why letters are shaped as they are. Our visual systems evolved for more than a hundred million years to be highly competent at visually processing natural scenes. One of the most central features of these natural scenes was simply this: they are filled with opaque objects strewn about. And that is enough to lead to visual regularities in nature. For example, there are three junction types having two contours – L, T and X. Ls happen at many object corners, Ts when one edge goes behind an object, and these two are accordingly common in natural scenes. X, however, is rare in natural scenes.

Matching nature, letter shapes with L and T topologies are also common across languages, but X topologies are rare. More generally, the shapes found more commonly in natural scenes are those found more commonly in writing systems. [See this SB piece for more: http://www.scientificblogging.com/mark_changizi/topography_language ]

The second concerns the origins of speech, and why speech sounds as it does. Our auditory systems evolved for tens of millions of years to be highly efficient at processing natural sounds.

Although nature consists of lots of sounds, one of the most fundamental categories of sound is this: solid-object events. Events among solid objects, it turns out, have rich regularities that one can work out. For starters, there are primarily three kinds of sound among solid objects: hits, slides and rings, the latter occurring as periodic vibrations of objects that have been involved in a physical interaction (namely a hit or a slide). Just as hits, slides and rings are the fundamental atoms of solid-object physical events, speech is built out of hits, slides and rings – called plosives, fricatives and sonorants. For another example, just as solid-object events consist of a physical interaction (hit or slide) followed by the resultant ring, the most fundamental structure across languages is the syllable, most commonly of the CV, or consonant-sonorant, form. More generally, and as I describe in my upcoming book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (2011), spoken languages share a wide variety of solid-object event signatures.

Written and spoken language look and sound like fundamental aspects of nature: opaque objects strewn about and solid-objects interacting with one another, respectively. Writing thereby harnesses our visual object-recognition mechanisms, and speech harnesses our event-recognition mechanisms. Neither opaque objects nor solid objects are especially evocative sources in nature, and that’s why the look of most writing and the sound of most speech is not evocative. [See this SciAm piece for more: http://www.scientificamerican.com/article.cfm?id=why-does-music-make-us-fe ]

Music – the third cultural production I have addressed with a nature-harnessing approach – is astoundingly evocative. What kind of story could I give here? A nature-harnessing theory would have to posit a class of natural auditory stimuli that music has culturally evolved to mimic, but haven’t I already dealt with nature’s sounds in my story for speech? In addition to general event recognition systems, we probably possess auditory mechanisms specifically designed for the recognition of human behavior. Human gait, I have argued, has signature patterns found in the regularities of rhythm. Doppler shifts of movers have regularities that one can work out, and these regularities are found in music’s melodic contours. And loudness modulations due to proximity predict how loudness is used in music.

These results are described in my upcoming book, Harnessed. For example, just as faster movers have a greater range of pitches from their directed-toward-you high pitch to their directed-away-from-you low pitch, faster tempo music tends to use a wider range of pitches for its melody. [See this SB piece for more: http://www.scientificblogging.com/mark_changizi/music_sounds_moving_people ]

[Image: structure of nature-harnessing arguments for speech, writing, and music]

Many other aspects of the arts are potentially treatable in a similar fashion. For example, color vision, I have argued, is optimized for detecting subtle spectral shifts in other people’s skin, indicating modulations in their emotion, mood or state. That is, color vision is a sense designed for perceiving the emotions of other people, and it is possible to understand the meanings of colors on this basis, e.g., red is strong because oxygenated hemoglobin is required for skin to display it. The visual arts are expected to have harnessed our brain’s color mechanisms by using colors as found in nature, namely principally as found on skin. Again, the strategy is to understand art without having to unravel the brain’s mechanisms.

One of the morals I want to convey is that you don’t have to be a neuroscientist to take a brain-based approach to art. The brain’s competencies can be ferreted out without going inside, by carving nature at its joints, just the joints the brain evolved to carve at. One can then search for signs of nature in the structure of the arts. My hope is that via the progress I have made for writing, speech and music, others will be motivated to take up the strategy for grappling with all facets of the arts, and cultural artifacts more generally.

This first appeared on March 4, 2010, as a feature at ScientificBlogging.com.

=============

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Originally a piece in ScientificBlogging, September 28, 2009…

You open your dictionary to figure out what your friend meant by ‘nasute,’ only to find that the definition is “A wittol, or jemadar; bannocked in an emunctory fashion.” What good is this dictionary, you wonder, if it only refers me to other words I don’t know? And worse still, the definitions of some of these words refer back to ‘nasute,’ the word you didn’t know in the first place! Even if your attempt to learn what ‘nasute’ means is not infected by circularity, you face a quick explosion of words to look up: the words in the definition, the words in each of these definitions, and so on. The dictionary appears, then, to be a terribly messy tangled web.

In reality, however, dictionaries aren’t quite that worthless. …and the definition of ‘nasute’ above is, thankfully, fiction. The standard line for why dictionaries are useful is that the typical users of dictionaries already know the meanings of much of the dictionary, and so a disorderly dictionary definition doesn’t send them on an exploding wild goose chase.

Dictionaries would, however, be only a source of frustration for a person not knowing any of the vocabulary. And, therefore, dictionaries – and the lexicon of language they attempt to record – can’t be our mental lexicon. If a word is in our mental lexicon, then we know what it means. And if we know what it means, then our brain is able to unpack its meaning in terms it understands. The brain is not sent on a wild goose chase like the one fated for a Zulu native handed the Oxford English Dictionary.

Compared to the disheveled dictionary, the mental lexicon is much more carefully designed. The mental lexicon is hierarchical, having at its foundation a small number – perhaps around 50 – of “semantic primes” (or fundamental atoms of meaning) that are combined to mentally define all our mental concepts, something the linguist Anna Wierzbicka has argued. And our internal lexicon has a number of hierarchical levels, analogous to the multiple levels in the visual hierarchy or auditory hierarchy.

The “visual meaning” of a complex concept – e.g., the look of a cow – gets built out of a large combination of fundamental visual atoms, e.g., oriented contours and colored patches. In the same way, the (semantic) meaning of the concept of a cow gets built out of a large combination of fundamental semantic atoms, e.g., words like ‘you’, ‘body’, ‘some’, ‘good’, ‘want’, ‘now’, ‘here’, ‘above’, ‘maybe’, and ‘more’. In both sensory and semantic hierarchies, the small number of bottom level primes are combined to build a larger set of more complex things, and these, in turn, are used to build more complex ones, and so on. For vision, sample objects at increasingly more complex levels include contours, junctions, junction-combinations, and objects. For the lexicon, examples of increasing complexity are ‘object’, ‘living thing’, ‘animal’, ‘vertebrate’, ‘mammal’, ‘artiodactyl’, and ‘cow’.

In our natural state, the mental lexicon we end up with would depend upon our experiences. No rabbits in your locale, no lexical entry in the head for rabbits. And the same is true for vision. The neural hierarchy was designed to be somewhat flexible, but designed to do its lexical work hierarchically, in an efficient fashion, no matter the specific lexicon that would fill it.

There is quite a difference, then, between the disordered, knotted dictionary and our orderly, heavily optimized, hierarchical mental lexicon. Language’s vocabulary – determined by cultural selection, and whose structure dictionaries partially record – does not seem, at first glance, to have harnessed the lexical expectations of our brain.

However, are dictionaries really so tangled? Back in 2005 while working at Caltech, I began to wonder. Dictionaries surely do have some messiness, because they’re built by real people from real messy data about the use of words: so some circularities may occasionally get thrown in by accident. But my bet was that the signature, efficient hierarchical structure of our inner lexicon should be in the dictionary, if only we looked carefully for it. Language would work best if the public vocabulary were organized in such a way that it would naturally fit the shape of our lexical brain, and I suspected cultural selection over time should have figured this out. …that it should have given us a dictionary shaped like the brain: a braintionary.

So I set out on a search for these signature hierarchical structures in the dictionary. A search to find the hidden brain in the dictionary. In particular, I asked whether the dictionary is hierarchically organized in such a way that it minimizes the total size of the dictionary needed to define everything it must.

To grasp the problem, a starting point is to realize that there is more than one way to build a hierarchical dictionary. One could use the most fundamental words to define all the other words in the dictionary, so that there would be just two hierarchical levels: the small set of fundamental (or atomic) words, and the set of everything else. Alternatively, dictionaries could use the most fundamental words to define an intermediate level of words, and in turn use these words to define the rest. That would make three levels, and, clearly, greater numbers of levels are possible.

My main theoretical observation was that having just the right number of hierarchical levels can greatly reduce the overall size of the dictionary. A dictionary with just two hierarchical levels, for example, would have to be more than three times larger than an optimal one that uses around seven levels.
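A toy cost model can show why an interior optimum in the number of levels exists. This is not the analysis from the paper; the cost function and every parameter value below are invented purely to illustrate the trade-off (few levels force long definitions; too many levels multiply entries):

```python
def dictionary_cost(levels, prime_depth=400, n_words=100_000):
    """Toy cost model (invented for illustration, not the paper's
    actual analysis). Assume every word ultimately unpacks into
    `prime_depth` semantic primes. With `levels` hierarchical levels,
    each definition need only cite prime_depth ** (1 / (levels - 1))
    entries from the level below, but the dictionary must carry
    definitions at every level. Total size ~ levels * entries * length."""
    if levels < 2:
        raise ValueError("need at least atoms plus one defined level")
    definition_length = prime_depth ** (1 / (levels - 1))
    return levels * n_words * definition_length

costs = {L: dictionary_cost(L) for L in range(2, 15)}
best = min(costs, key=costs.get)
print(best)                    # an interior optimum: neither 2 nor 14 levels
print(costs[2] / costs[best])  # the flat dictionary is many times larger
```

In this toy model the flat two-level dictionary comes out many times larger than the optimum; the actual measurements and cost accounting differ, but the qualitative trade-off is the same.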

Via measurements from and analysis of WordNet and the Oxford English Dictionary, in a paper I published in Cognitive Systems Research I provided evidence that actual dictionaries have approximately the optimal number of hierarchical levels. I discovered that dictionaries do have the structure expected for the brain’s lexical hierarchy. Dictionaries are braintionaries, designed by culture to have the structure our brains like, maximizing our vocabulary capabilities for our fixed ape brains.

What it means is that language has culturally evolved over the centuries and millennia not only to have the words we need, but also to have an overall organization – in terms of how words get their meanings from other words – that helps minimize the overall size of the dictionary. …and simultaneously helps us efficiently encode the lexicon in our heads.

The journal article itself is available here.

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Working now on my third book, called HARNESSED: How Language and Music Mimicked Nature and Transformed Ape to Man. Here is the short overview…

If one of our non-speaking ancestors were found frozen in a glacier and revived, we imagine that he would find our world jarringly alien. The concrete, the cars, the clothes, the constant jabbering – it’s enough to make a hominid jump into the nearest freezer and hope to be reawoken after the apocalypse. But would modernity really seem so frightening to our guest? Although cities and savannas would appear to have little in common, might there be deep similarities? Could civilization have retained vestiges of nature, easing our ancestor’s transition?

Although we were born into civilization rather than thawed into it, from an evolutionary point of view we’re an uncivilized beast dropped into cultured society. We prefer nature as much as the next hominid, in the sense that our brains work best when their computationally sophisticated mechanisms can be applied as evolutionarily intended. One might, then, expect that civilization will have been shaped over time to possess signature features of nature, thereby squeezing every drop of evolution’s genius for use in the modern world.

Does civilization mimic nature? In his new book, HARNESSED, Mark Changizi argues that the most fundamental pillars of humankind are thoroughly infused with signs of the ancestral world. Those pillars are language and music. Cultural evolution over time has led to language and music designed as simulacra of nature, so that they can be nearly effortlessly utilized by our ancient brains. Languages have evolved so that words look like natural objects when written and sound like natural events when spoken. And music has come to have the signature auditory patterns of people moving in one’s midst.

But if the key to our human specialness rests upon powers likely found in our non-linguistic hominid ancestors, then it suggests we are our non-linguistic hominid ancestors. Our thawed ancestors may do just fine here because our language would harness their brain as well. Rather than jumping into a freezer, our long-lost relative may choose instead to enter engineering school and invent the next generation of refrigerator. The origins of language and music may be attributable not to brains having evolved language or music instincts, but, rather, to language and music having culturally evolved brain instincts. Language and music shaped themselves over many thousands of years to be tailored for our brains, and because our brains were cut for nature, language and music mimicked nature. …transforming ape to man.

Mark Changizi is Professor of Cognitive Science at RPI, and the author of The Vision Revolution (Benbella, 2009) and The Brain from 25000 Feet (Kluwer, 2003).

[See related pieces on music in ScienceDaily and Scientific American.]

