Posts Tagged ‘Brain’

I’ve argued there’s no imminent singularity, and I’ve thrown water on the idea that the web will become smart or self-aware. But am I just a wet blanket, or do I have a positive vision of our human future?

I have just written up a short “manifesto” of sorts about where we humans are headed, and it appeared in Seed Magazine. It serves not only as a guidepost to our long-term future, but also as a guide to creating better technologies for our brains (part of the aim of 2AI, the research institute I co-direct with colleague Tim Barber).

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books, 2009) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011).



Jeremy Hsu recently interviewed me for a piece in LiveScience (picked up at MSNBC and Yahoo) about brain evolution and its relationship to the prospects for AI.  His piece was in reaction to a Brain, Behavior and Evolution piece I wrote, and also to the following featured SB piece I wrote…

===

There are currently two ambitious projects straddling artificial intelligence and neuroscience, each with the aim of building big brains that work. One is The Blue Brain Project, and it describes its aim in the following one-liner:

“The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.”

The second is a multi-institution, IBM-centered project called SyNAPSE, which a press release describes as follows:

“In an unprecedented undertaking, IBM Research and five leading universities are partnering to create computing systems that are expected to simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size.”

Oh, is that all!

The difficulties ahead of these groups are staggering, as they (surely) realize. But rather than discussing the many roadblocks likely to derail them, I want to focus on one way in which they are perhaps making things too difficult for themselves.

In particular, each aims to build a BIG brain, and I want to suggest here that perhaps they can get the intelligence they’re looking for without the BIG.

Why not go big? Because bigger brains are a pain in the neck, and not just for the necks that hold them up. As brains enlarge across species, they must modify their organization in radical ways in order to maintain their required interconnectedness. Convolutedness increases, number of cortical areas increases, number of synapses per neuron increases, white-to-gray matter ratio rises, and many other changes occur in order to accommodate the larger size. Building a bigger brain is an engineering nightmare, a nightmare you can see in the ridiculously complicated appearance of the dolphin brain relative to that of the shrew brain below – the complexity you see in that dolphin brain is due almost entirely to the “scaling backbends” it must do to connect itself up in an efficient manner despite its large size. (See http://www.changizi.com/changizi_lab.html#neocortex )

[Figure: dolphin brain versus shrew brain]

If the only way to get smarter brains was to build bigger brains, then these AI projects would have no choice but to embark upon a pain-in-the-neck mission. But bigger brains are not the only way to get smarter brains. Although for any fixed technology, bigger computers are typically smarter, this is not the case for brains. The best predictor of a mammal’s intelligence tends not to be its brain size, but its relative brain size. In particular, the best predictor of intelligence tends to be something called the encephalization quotient (a variant of a brain-body ratio), which quantifies how big the brain is once one has corrected for the size of the body in which it sits. The reason brain size is not a good predictor of intelligence is that the principal driver of brain size is body size, not intelligence at all. And we don’t know why. (See my ScientificBlogging piece on brain size, Why Doesn’t Size Matter…for The Brain?)

This opens up an alternative route to making an animal smarter. If it is brain-body ratio that best correlates with intelligence, then there are two polar opposite ways to increase this ratio. The first is to raise the numerator, i.e., to increase brain size while holding body size fixed, as the vertical arrow indicates in the figure below. That’s essentially what the Blue Brain and SyNAPSE projects are implicitly trying to do.

But there is a second way to increase intelligence: one can raise the brain-body ratio by lowering the denominator, i.e., by decreasing the size of the body, as shown by the horizontal arrow in the figure below. (In each case, the arrow shifts to a point that is at a greater vertical distance from the best-fit line below it, indicating its raised brain-body ratio.)

[Figure: brain weight versus body weight for primates, with best-fit line]

Rather than making a bigger brain, we can give the animal a smaller body! Either way, brain-body ratio rises, as potentially does the intelligence that the brain-body combo can support.
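The arithmetic of the two routes can be sketched in a few lines of Python. This is purely illustrative: the 3/4 exponent is the one from the scaling relationship discussed here, but the proportionality constant and the masses below are made-up placeholder numbers, not fitted values.

```python
# Encephalization quotient (EQ): actual brain mass divided by the brain
# mass expected for the animal's body mass, where the expected mass
# follows the best-fit power law (~ c * body^(3/4)).
def expected_brain_mass(body_mass, c=0.12, exponent=0.75):
    """Expected brain mass for a given body mass (c is a placeholder)."""
    return c * body_mass ** exponent

def eq(brain_mass, body_mass):
    """How far above (or below) the best-fit line this animal sits."""
    return brain_mass / expected_brain_mass(body_mass)

# Two routes to a higher EQ from the same baseline animal:
baseline     = eq(brain_mass=10.0, body_mass=400.0)
bigger_brain = eq(brain_mass=20.0, body_mass=400.0)  # raise the numerator
smaller_body = eq(brain_mass=10.0, body_mass=50.0)   # lower the denominator

print(bigger_brain > baseline)  # True
print(smaller_body > baseline)  # True
```

Either change moves the animal's point farther above the best-fit line, which is all the encephalization quotient measures.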

We’re not in a position today to understand the specific mechanisms by which brains differ when body size drives their size, so we cannot simply shrink the body and get a smarter beast. But, then again, we don’t understand the specific mechanisms by which larger brains differ at all! Building smarter via building larger brains is just as much a mystery as the prescription I am suggesting: building smarter via building smaller bodies. And mine has the advantage of avoiding the engineering nightmare of scaling up large brains.

For AI researchers to actually take this advice, though, they must answer the following question: what counts as a body for these AI brains in the first place? Only after one is clear on what their bodies actually are (i.e., what size body the brains are being designed to support) can one begin to ask how to get by with less of it, and hope to eke out greater intelligence with less brain.

Perhaps this is the ace in the AI hole: perhaps AI researchers have greater freedom to shrink body size in ways nature could not, and thereby grow greater intelligence. Perhaps the AI devices that someday take over and enslave us will have mouse brains with fly bodies. I sure hope I’m there to see that.

This first appeared on March 9, 2010, as a feature at ScientificBlogging.com.

=============

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


I was recently interviewed about brain and city evolution by “Gladelic: A Quarterly Magazine of Intuitive Intelligence”. Here’s the beginning…

Mark, why have you chosen to focus your study on the neocortex and its importance in human evolution? And how did you come to compare cities and brains?

Despite the brain’s complexity, our gray matter is essentially a surface, albeit convoluted in our case. Bigger brains expand the surface area of the gray matter, with only a meager increase in the thickness of gray matter. Cities, too, are surfaces, because they lie on the Earth. Our cortex has ample white matter wiring that “leaps” out of the gray matter to faraway parts of the brain, and these long-range connections are crucial to keeping the entire cortex closely connected. For cities, highways are the white matter axons, leaving the surface streets to efficiently connect to faraway spots. I began to follow these leads, and to flesh out further analogies: synapses and highway exits; axon wire thickness and number of highway lanes; axon propagation velocity and average across-city transit speed.

I found similar scaling laws governing how network properties increase as surface area increases. For example, in each case, the number of conduits (highways and white matter axons) increases as surface area to approximately the 3/4 power, and the total number of “leaves” (exits and synapses) increases as surface area to approximately the 9/8 power.
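To make those exponents concrete, here is a minimal Python sketch. The exponents are the ones quoted above; the prefactors are arbitrary placeholders, since only the ratios matter for the scaling behavior:

```python
# Scaling of network quantities with surface area S:
# conduits (highways / white matter axons) grow as S^(3/4),
# "leaves" (exits / synapses) grow as S^(9/8).
def conduits(S, k=1.0):
    return k * S ** 0.75

def leaves(S, k=1.0):
    return k * S ** 1.125

# Quadrupling the surface area roughly triples the number of conduits
# (4^0.75 ≈ 2.83) but nearly quintuples the leaves (4^1.125 ≈ 4.76):
print(conduits(4.0) / conduits(1.0))  # ≈ 2.83
print(leaves(4.0) / leaves(1.0))      # ≈ 4.76
```

Note that because the leaf exponent exceeds 1, exits and synapses grow faster than the surface itself: a bigger city or cortex becomes disproportionately dense in endpoints.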

Despite the radically different kind of network, they are in some respects similar enough, and each has been under selection pressure over time to become more efficiently wired. The selection pressure for brains was, of course, natural selection, which involved lots and lots of being eaten. And the selection pressure for cities was teems of political decisions over many decades to steer a city to work better and better as it grew.

The rest of the interview is at Gladelic (half way down). More about my city-brain research can be found here: https://changizi.wordpress.com/category/cities-shaped-like-brains/

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Benchfly’s Alan Marnett hit me with an in-depth interview on Dec 16, 2009. In addition to getting into the science, the nice thing about the interview was the opportunity to talk about different ways of being a scientist. As you’ll see, I suggest being an aloof son-of-a-bitch, something I also talk about in this piece titled “How Not to Get Absorbed in Someone Else’s Abdomen”.

—————————————

As research scientists, many of us spend a very large amount of time working on a very small subject. In fact, it’s not unusual for a biochemist to go through their entire career without ever physically observing the protein or pathway they work on. As we hyper-focus on our own niche of science, we run the risk of forgetting to take the blinders off to see where our slice of work fits into the rest of the pie.


For Dr. Mark Changizi, assistant professor and author of The Vision Revolution, science starts with the pie.  We spoke with Dr. Changizi about why losing focus on the big picture can hurt our research, how autistic savants show us the real capacity of the brain and what humans will look like a million years from now.

BenchFly: Your book presents theories on questions ranging from why our eyes face forward to why we see in color.  Big questions.  As a kid, was it your attraction to the big questions that drew you into science?

Mark Changizi: I sometimes distinguish between two motivations for going into science. First there’s the “radio kid,” the one who takes apart the radio, is always fascinated with how things work, and is especially interested in “getting in there” and manipulating the world. And then there’s the “Carl Sagan kid,” the one motivated by the romantic what-does-it-all-mean questions. The beauty of Sagan’s Cosmos series is that he packaged science in such a way that it fills the more “religious” parts of one’s brain. You tap into that in a kid’s mind, and you can motivate them in a much more robust way than you can from a here’s-how-things-work motivation. I’m a Carl Sagan kid, and was specifically further spurred on by Sagan’s Cosmos. As long as I can remember, my stated goal in life has been to “answer the questions to the universe.”

While that aim has stayed constant, my views on what counts as “the questions to the universe” have changed. As a kid, cosmology and particle physics were where I thought the biggest questions lay. But later I reasoned that there were even more fundamental questions: even if physics were different from what we have in our universe, math would be the same. In particular, I became fascinated with mathematical logic and the undecidability results, the area of my dissertation. With those results, one can often make interesting claims about the ultimate limits on thinking machines. But math is not the only thing more fundamental than physics – that math is more fundamental is obvious. In a universe without our physics, the emergent principles governing complex organisms and evolving systems may still be the same as those found in our universe. Even economic and political principles, in this light, may be deeper than physics: five-dimensional aliens floating in goo in a universe with quite different physics may still have limited resources, and may end up with the same economic and political principles we fuss over.

So perhaps that goes some way to explaining my research interests.

Tell us a little about both the scientific and thought processes when tackling questions that are very difficult to actually prove beyond a shadow of a doubt.

This is science we’re talking about, of course, not math, so nothing in science is proven in the strong mathematical sense. It is all about data supporting one’s hypothesis, and all about the parsimonious nature of the hypothesis.  Parsimony aims for explaining the greatest range of data with the simplest amount of theory. That’s what I aim for.

But it can, indeed, be difficult to find data for the kinds of questions I am interested in, because they often make predictions about a large swathe of data nobody has. That’s why I typically have to generate 50 to 100 ideas in my research notes before I find one that’s not only a good idea, but one for which I can find data to test it. You can’t go around writing papers without new data to test them. If you want to be a theorist, then not only can you not afford to spend the time to become an experimentalist to test your questions, but most of your questions may not be testable by any set of experiments you could hope to do in a reasonable period of time. Often it requires pooling together data from across an entire literature.

In basic research we are often hyper-focused on the details. To understand a complex problem, we start very simple and then assume we will eventually be able to assemble the disparate parts into a single, clear picture. In essence, you think about problems in the opposite direction: asking the big questions up front. Describe the philosophical difference between the two approaches, as well as their relationship in the process of discovery.

A lot of people believe that by going straight to the parts – to the mechanism – they can eventually come to understand the organism. The problem is that the mechanisms in biology were selected to do stuff, to carry out certain functions. The mechanisms can only be understood as mechanisms that implement certain functions. That’s what it means to understand a mechanism: one must say how the physical material manages to carry out a certain set of functional capabilities.

And that means one must get into the business of building and testing hypotheses about what the mechanism is for. Why did that mechanism evolve in the first place? There is a certain “reductive” strain within the biological and brain sciences that believes science has no role for questions of “why” – that’s “just-so story” stuff. Although there are plenty of just-so stories – i.e., bad science – in the study of the design and function of biological structure, it by no means needs to be that way. It can be good science, just like any other area of science. One just needs to make testable hypotheses, and then go test them. And it is not appreciated how often reductive types are themselves in the business of just-so stories; e.g., computational simulators concern themselves just with the mechanisms and often eschew worrying about the functional level, but then allow themselves a dozen or more free parameters in their simulation to fit the data.

So, you have got to attack the functional level in order to understand organisms, and you really need to do that before, or at least in parallel with, the study of the mechanisms.

But in order to understand the functional level, one must go beyond the organism itself, to the environment in which the animal evolved. One needs to devise and test hypotheses about what the biological structure was selected for, and must often refer to the world. One can’t just stay inside the meat to understand the meat.

Looking just at the mechanisms is not only not sufficient, but will tend to lead to futility. An organism’s mechanisms were selected to function only when the “inputs” were the natural ones the organism would have encountered. But when you present a mechanism with an utterly unnatural input, the meat doesn’t output, “Sorry, that’s not an ecologically appropriate input.” (In fact, there are results in theoretical computer science saying that it wouldn’t be generally possible to have a mechanism capable of having such a response.) Instead, the mechanism does something. If you’re studying the mechanism without an appreciation for what it’s for, you’ll have teems and teems of mechanistic reactions that are irrelevant to what it is designed for, but you won’t know it.

The example I often use is the stapler. Drop a stapler into a primitive tribe, and imagine what they do to it. Having no idea what it’s for, they manage to push and pull its mechanisms in all sorts of irrelevant ways. They might spend years, say, carefully studying the mechanisms underlying why it falls as it does when dropped from a tree, or how it functions as crude nunchucks. There are literally infinitely many aspects of the stapler mechanism that could be experimented upon, but only a small fraction are relevant to the stapler’s function, which is to fasten paper together.

In explaining why we see in color, you suggest that it allows us to detect the subtleties of complex emotions expressed by humans – such as blushing.  Does this mean colorblind men actually have a legitimate excuse for not understanding women?!

… to see my answer, and the rest of the interview, go to Benchfly.


This first appeared on November 16, 2009, as a feature at ScientificBlogging.com

No one draws pictures of heads with little gears or hydraulics inside any more. The modern conceptualization of the brain is firmly computational. The brain may be wet, squooshy, and easy to serve with an ice cream scooper, but it is nevertheless a computer.

However, there is a rather glaring difficulty with this view, and it is encapsulated in the following question: If our brains are computers, why doesn’t size matter? In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter. Most of the brain size differences across mammals seem to make no behavioral difference at all to the animal.

Instead, the greatest driver of brain size is not how smart the animal is, but how big the animal is. Brain size doesn’t much matter – instead, it is body size that matters. That is not what one would expect of a computer in the head. Brain scientists have long known this. For example, take a look at the plot below showing how brain mass varies with body mass. You can see how tightly correlated they are. If one didn’t know that the brain was the thinking organ and consequently lobbed it into the same pile as the liver, heart and spleen (FYI, I keep my pile of organs in the crawl space), then one would not find it unusual that it increases so much with body size. Organs do that.

But the brain is supposed to be a computer of some strange kind. And yet it is acting just like a lowly organ. It gets bigger merely because the animal’s body is bigger, even though the animal may be no smarter. The plot below, from a 2007 article of mine (in Kaas JH (ed.) Evolution of Nervous Systems. Oxford, Elsevier) shows how behavioral complexity varies with brain mass. There is no correlation. Bigger and bigger brains, and seemingly doing nothing for the animal!

It has long been clear to neuroscientists that what does correlate nicely with animal intelligence is how high above the best-fit line a point is in the brain-versus-body plot we saw earlier. This is called the encephalization quotient, or EQ. It is simply a measure of how big the brain is once one has controlled for body size. EQ matches our intuitive ranking of mammalian intelligence, and in a 2003 paper (in the Journal of Theoretical Biology) I showed that it also matches quantitative measures of their intelligence (namely, the number of items in ethograms as measured by ethologists). The plot is shown below, where you can see that the number of behaviors in each of the mammalian orders rises strongly with EQ.

But although this is well known by neurobiologists, there is still no accepted answer to why brains get bigger with body size. Why should a cow have a brain 200 times larger than a roughly equally smart rat, or 10 times larger than a clearly-smarter house cat? One of my older research areas, in fact, aimed to explain why brains change in the ways they do as they grow in size from mouse to whale (http://www.changizi.com/changizi_lab.html#neocortex), and yet, embarrassingly, I have no idea why these brains are increasing with body size at all. If a dull-witted cow could just stick a tiny rat brain into its head and get all the behavioral complexity it needs, then brains would come in just one size, and I would have had no research to work on concerning the manner in which brains scale up in size.

So, here’s a plan. I would like to hear your hypotheses for why brains increase so quickly with body mass (namely as the 3/4 power). I will let you know if the idea is new, and I will see if I can give your idea a good thrashing. What’s at stake here is our very framework for conceptualizing what the brain is. Perhaps you can say why it is a computer, and that greater body size brings in certain subtle computational demands that explain why brain volume should increase as it does with body mass. Or, more exciting, perhaps you can propose an altogether novel framework for thinking about the brain, one that makes the enigmatic “size matters” issue totally obvious.
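For anyone tempted to test a hypothesis quantitatively: the 3/4 power shows up as the slope of the best-fit line in log-log coordinates, and EQ is how far above that line a species sits. Here is a Python sketch using synthetic data generated to follow the law (the masses are fabricated for illustration; real numbers would come from the comparative datasets mentioned above):

```python
import math
import random

# Synthetic brain/body masses following brain ~ c * body^(3/4) with
# log-normal noise, then recover the exponent as the slope of the
# log-log best-fit line (ordinary least squares).
random.seed(0)
c, true_exp = 0.1, 0.75
bodies = [10 ** random.uniform(1, 6) for _ in range(200)]  # five orders of magnitude (arbitrary units)
brains = [c * b ** true_exp * 10 ** random.gauss(0, 0.05) for b in bodies]

xs = [math.log10(b) for b in bodies]
ys = [math.log10(m) for m in brains]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

print(round(slope, 2))  # ≈ 0.75

# A species' EQ-like score is its height above the fitted line:
residuals = [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]
```

Any proposed explanation for the scaling would have to predict not just that the slope is positive, but that it lands near 3/4 across mammals.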

To the comments!…

This is where the fun of the piece begins, because at ScientificBlogging.com there were more than 70 comments, all quite productive (no trolls). So, go here and scroll down to the comments.  …and leave one!

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Comments

It seems to me that total brain mass vs. body size doesn’t account for different parts of the brain.
Intelligence seems to me to be more related to the percentage of brain mass dedicated to the frontal cortex vs. the total brain mass. Larger animals may need more brain mass to process more nerve receptors in their larger amount of skin, for example, or dedicated to processing smell. But the part of the brain dedicated to higher-level functions may be smaller by some measure (either total mass, or percentage of the rest of the brain mass, etc.)


Hi Chuck

“to process more nerve receptors in the larger amount of skin”
Nice. That’s one common hypothesis. And not only more skin and thus more sensory receptors, but more musculature, and so on. But *that* would seem to imply that bigger mammals should have disproportionately larger somatosensory and motor areas, but they don’t.

“dedicated to processing Smell”
But why should larger animals need bigger olfactory neural tissue?

The motor processing functions of an animal’s brain may be proportionate, but the brain as a whole has to take the total motor processing input and output into account; when you are large and have a complex environment to deal with, you need a correspondingly larger brain to deal with it.


I was on the Lionel Show / Air America this morning, which was a blast!  Got to talk about my recent book, and about evolution, autistic savants, intelligent design, color, forward-facing eyes, illusions, and more. I really must get off the elliptical machine next time I do a radio show. Here’s the segment with me (or mp3 on your computer).

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


[Photo: Kirsten Sanford]

Kirsten Sanford (shown here) and co-host Justin Jackson (sorry Justin, you understand) of This Week in Science interviewed me last week about my research and my recent book, The Vision Revolution.

Here’s the interview, and I don’t start jibber-jabbering until about 33 minutes in.  Notice how they sucker-punch me right in the belly button. (That’s what they mean by “the kickass science podcast”.)

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).

