
Posts Tagged ‘Creativity’

Meghan Casserly recently wrote a piece at Forbes on the role of social media for creativity, and specifically about one’s ability to have more than one identity. Does this help or hurt? Read her piece here.

She discusses a piece I wrote on this issue last year, and which I have put below…

===

Multiple Personality Social Media

For those who have not entered the world of Twitter, it is hard to fathom why people feel compelled to stream their lives to strangers 140 characters at a time. And such non-Twitter folk are also unlikely to fathom the purpose of blogging, especially in a world with more than 170 million blogs. Imagine the non-tweeting non-blogger’s disbelief, then, when they read story after story about how Twitter, Facebook, WordPress, Posterous and the other “Social Web 2.0” heavyweights are changing the world as we know it. “Hogwash!” might be their succinct reply.

But for those of us who have entered the world of social media, it is clear that there’s much more going on than streaming one’s life to strangers. So much is going on, in fact, that you could spend the rest of your life reading hack books on how to “do” social media more ably. And you could choose to connect only with “social media” gurus on Twitter and still acquire more than 100,000 “friends”.

With more than a half billion people hooked into social media, something big is, indeed, happening. But what? There are a variety of candidates to point to: the greater interconnectivity, the nature of the connectivity (e.g., “small world”), the speed at which information courses through the networks, the exposure to a wider variety of people and ideas, the enhanced capability for collaborations, the tight wedding of human connections with web content, and so on.

There is, however, one facet of social media that has gone largely unnoticed: Multiple personalities. Upon soaking myself in social media over the last year, I was surprised to find that many of those most steeped in social media maintain not just one blog, but several (and in some cases more), each devoted to his or her distinct interests. I have also found that it is similarly common to possess multiple Twitter identities; in one case it was weeks before I realized two “friends” were actually a single person. Maintenance of multiple personalities in real-life flesh-and-blood social networks is considerably more difficult.

At first I felt these multiple personalities were vaguely creepy. “Figure out who you are, and stick with it!” was my reaction. But gradually I have come to appreciate multiple personalities (and so have I). In fact, I now believe that the ease with which social media supports multiple personalities is one of the unappreciated powers of the Social Web 2.0.

To understand why multiple personalities are so powerful, let’s back up for a moment and recall what makes economies so innovative. While it helps if the economy is filled with creative entrepreneurs, the fundamental mechanism behind the economy’s genius is not the genius of individuals but the selective forces which enable some entrepreneurs to thrive and others to wither away. Selective forces of this broad kind underlie not just the entrepreneurial world, but also the sciences and the arts.

Scientific communities, for example, chug inexorably forward with discoveries, but this progress occurs by virtue of there being so many independently digging scientists in a community that eventually some scientists strike gold, even if sometimes only serendipitously. Whether entrepreneurial, scientific or artistic, communities can be creative even if a vast majority of their members fail to ever achieve something innovative.

This is where multiple personalities change the game. Whereas individuals were traditionally members of just one community, and risky ventures such as entrepreneurship, science and the arts could get only one roll of the dice, in the age of Social Web 2.0 people can split themselves into multiple selves inhabiting multiple communities. Although too much splitting will dilute the attention that can be given to the distinct personalities and thereby lower the chance that at least one personality succeeds in its allotted realm, with a small number of personalities one may be able to increase the chances that at least one of the personalities succeeds. For example, with two personalities taking their respective shots within two distinct communities, the “owner” of those personalities may have raised the probability that at least one personality succeeds by nearly a factor of two (although a factor greater than one is all one would need to justify splitting into two personalities).
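
Taking that example literally, here is a minimal back-of-the-envelope sketch, under assumptions of my own that the piece does not spell out: each personality succeeds independently and with the same small probability p.

```python
# Toy check of the "nearly a factor of two" claim, assuming (my assumption,
# not the author's stated model) independent personalities with equal success
# probability p.
p = 0.05                    # hypothetical per-personality chance of success
one = p                     # chance of success with a single identity
two = 1 - (1 - p) ** 2      # chance that at least one of two identities succeeds
print(one, round(two, 4), round(two / one, 2))   # 0.05 0.0975 1.95 -- just shy of a factor of two
```

The ratio approaches two only when p is small; for larger p the two identities increasingly overlap in their successes, which fits the parenthetical point that any factor above one already justifies the split.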

With multiple personalities in hand, people can choose to take up creative endeavors they would not have been willing to enter into outside of social media because the risks of failure were too high. Multiple personalities can lower these risks.

One of the greatest underappreciated benefits of social media, then, may be that it brings a greater percentage of the world into creative enterprises they would not otherwise have considered.

This, I submit, is good.

~~~

This first appeared Feb 22, 2010 at Science 2.0.

Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books). He is working on his fourth book at the moment, tentatively titled Making Faces, about emotions and facial expressions.


This week a computer science researcher named Vinay Deolalikar claimed to have a proof that P is not equal to NP.

Let’s set aside what this means for another day, lest I get distracted.  The important thing now is that this is big. Huge, even!

If, that is, he’s correct.

But correct or not, that’s the kind of thing one expects to see in academia. Tenure gives professors job security and research freedom, exactly the conditions needed to enable them to make the non-incremental breakthroughs that fundamentally alter the intellectual landscape. (And in the case of P not equal to NP, to acquire fame and fortune.)

It is illuminating, then, to note that the man behind the new purported proof is not in academia at all, but in industry – at HP Labs.

And he’s not alone in his non-academic status.

For example, famously reclusive mathematician Grigori Perelman (who proved the Poincaré conjecture) rejected tenure-track positions for a research position.

Big discoveries such as these from researchers outside of academia may be symptoms of a deep and systemic illness in academia, an illness which inhibits professors from making big-leap theoretical advances.

The problem is simply this: You can’t write a grant proposal whose aim is to make a theoretical breakthrough.

“Dear National Science Foundation: I plan on scrawling hundreds of pages of notes, mostly hitting dead ends, until, in Year 4, I hit pay-dirt.”

Theoretical breakthroughs can’t be mapped out in advance. You can’t know you’ve broken through until you’re…through.

…at which point there is nothing left to propose to do in a grant application.

“Fine,” you might say. “If you can’t write a grant proposal for theoretical innovation, then don’t bother with grants.”

And now we find the crux of the problem.

In academia grant-getting is paramount. Universities are a business. Not a business of student education, and not a business of fundamental intellectual research. Universities are in the business of securing grant funds. That’s how they survive.  And because grants are the university’s bread and butter, grants become the academic professor’s bread and butter.

Getting grants is the principal key to individual success in academia today. They get you more space, more money, more monikers, more status, and more invitations to lunch with the president.

To ensure one is in the good graces of one’s university, the young creative aspiring assistant professor must immediately begin applying for grants in earnest, at the expense of spending energies on uncertain theoretical innovation.

In order to have the best chance at being funded, one’s proposed work will often be a close cousin of one’s doctoral or post-doctoral work.  And the proposed work – in order to be proposed at all – must be incremental, and consequently applied in some way, to experiments or to the construction of a device of some kind.

So a theorist in academics must set aside his or her theoretical work, and propose to do experimental or applied work, where his or her talents do not lie.

But if you’re good at theory, you really ought to be doing theory, not its application. If Vinay Deolalikar is right about his proof – and probably even if he’s mistaken – then he should be spending his time proving new things, not carrying out a five year plan to, say, build a better gadget based on it. There are others much better at the application side for that.

But that’s where tenure comes in, right? With tenure, professors can forego grants, and become intellectually unhinged (in the good way).

There are severe stumbling blocks, however.

First, once one builds a lab via grant money (on the way to tenure), one’s research inevitably changes. And, without realizing it, one dupes oneself into thinking that the funded research direction is what one does. After all, it is the source of one’s new-found status.

Second, once one has a lab, one does not want to become the person others whisper about as having “lost funding.” The loss of status is too psychologically severe for any mere human to take, and so maintaining funding becomes the priority.

But to keep the funding going, the best strategy is to do more follow-up incremental work. …more of the same.

And in what feels like no time at all, two decades have flown by, and (if you’re “lucky”) you’re the bread-winning star at your university and research discipline.

But success at that game meant you never had time to do the creative theoretical leaps you had once hoped to do. You were transformed by the contemporary academic system into an able grant-getter, and somewhere along the way lost sight of the more fundamental overthrower-of-dogma and idea-monger identity you once strived for.

Were the “P is not equal to NP” proof claimer, Vinay Deolalikar, a good boy of academia, he would have spent his time applying for funding to apply computer science principles to, say, military or medical applications, and not wasted his time with risky years of effort toward proofs like the one he put together, where no grant-funding is at stake for the university. Were Vinay Deolalikar in academia, he’d be implicitly discouraged from such an endeavor.

If we are to have any hope of understanding the brain, for example, then academia must be fixed. The brain is inordinately more complicated than physics, and in much more need of centuries of theoretical advance than physics. Yet theorists in neuroscience striving for revolutionary theoretical game-changers are extremely rare, and often come from outside.

One simple step in the right direction would be to fund the scientist, not the proposal. The best (although still not great) evidence that you’re capable of theoretical innovation is that you’ve produced one or more such innovations before. This sidesteps the dilemma of having to propose, impossibly, that one will make a seminal theoretical discovery.

In the longer term, more is needed. New models for funding academics must be invented, where the aim is for a system that optimally harnesses the creative potential of professors to change the world, and not just to keep universities afloat.

~~~~

This first appeared August 11, 2010, at Psychology Today.

Mark Changizi is the author of THE VISION REVOLUTION (Benbella, 2009) and HARNESSED: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella, 2011); he was recently attracted to a position outside of academia, as the Professor of Human Cognition at 2AI Labs.


“My plan for today:

1. Pick up dry cleaning.
2. Go to dentist.
3. Think up brilliant idea.”

Good luck with that third bullet. Big ideas can’t be planned like growing tomatoes in one’s garden. We stumble upon ideas, and although we can sometimes recall how we got there, we could not have anticipated the discovery in advance. That’s why grant proposals never wrap up as, “And via following this four-part plan, I will have arrived at a ground-breaking new hypothesis by year three.”

Three impossible thoughts before breakfast we can manage, but one great idea before dinner we cannot.

Unplanned ideas are often best illustrated by ‘Eureka!’, or ‘Aha!’, moments, like Einstein’s clock tower moment that sparked his special relativity, or Archimedes’ bathtub water-displacement idea.

Why are great ideas so unanticipatable?

Perhaps ideas cannot be planned because of some peculiarity of our psychology. Had our brains evolved differently, perhaps we would never have Eureka moments.

On the other hand, what if it is much deeper than that? What if the unplannability of ideas is due to the nature of ideas, not our brains at all? What if the computer brain, Hal, from 2001: A Space Odyssey were to say, “Something really cool just occurred to me, Dave!”

In the late 1990s I began work on a new notion of computing which I called “self-monitoring” computation. Rather than having a machine simply follow an algorithm, I required that a machine also “monitor itself.” What this meant was that the machine must at all stages report how close it is to finishing its work. And, I demanded that the machine’s report not merely be a probabilistic guess, but a number that gets lower on each computation step.

What was the point of these machines? I was hoping to get a handle on the unanticipatability of ideas, and to understand the extent to which Eureka moments are found for any sophisticated machine.

If a problem could be solved via a self-monitoring machine, then that machine would come to a solution without a Eureka moment. But, I wondered, perhaps I would be able to prove that some problems are more difficult to monitor than others. And, perhaps I would be able to show that some problems are not monitorable at all – and thus their solutions necessitate Eureka moments.

On the basis of my description of self-monitoring machines above, one might suspect that I demanded that the machine’s “self-monitoring report” be the number of steps left in the algorithm. But that would require machines to know exactly how many steps they need to finish an algorithm, and that wouldn’t allow machines to compute much.

Instead, the notion of “number” in the self-monitoring report is more subtle (concerning something called “transfinite ordinal numbers”), and can be best understood by your and my favorite thing…

Committee meetings.

Imagine you have been placed on a committee, and must meet weekly until some task is completed. If the task is easy, you may be able to announce at the first meeting that there will be exactly, say, 13 meetings. Usually, however, it will not be possible to know how many meetings will be needed.

Instead, you might announce at the first meeting that there will be three initial meetings, and that at the third meeting the committee will decide how many more meetings will be needed. That one decision about how many more meetings to allow gives the committee greater computational power.

Now the committee is not stuck doing some fixed number of meetings, but can, instead, have three meetings to decide how many meetings it needs. This decision about how many more meetings to have is a “first-order decision.”

And committees can be much more powerful than that.

Rather than deciding after three meetings how many more meetings there will be, you can announce that at the end of that decided-upon number of meetings, you will allow yourself one more first-order decision about how many meetings there will be. The decision in this case is to allow two first-order decisions about meetings (the first occurring after three initial meetings).

You are now beginning to see how you as the committee head could allow the committee any number of first-order decisions about more meetings. And the more first-order decisions allowed, the more complicated the task the committee can handle.

Even with all these first-order decisions, committees can get themselves yet more computational power by allowing themselves second-order decisions, which concern how many first-order decisions the committee will be allowed to have. So, you could decide that on the seventh meeting the committee will undertake a second-order decision, i.e., a decision about how many first-order decisions it will allow itself.

And once you realize you are allowed second-order decisions, why not use third-order decisions (about the number of second-order decisions to allow yourself), or fourth-order decisions, and so on.

Committees that follow a protocol of this kind will always be able to report how close they are to finishing their work. Not “close” in the sense of the exact number of meetings. But “close” in the sense of the number of decisions left at all the different levels. And, after each meeting, the report of how close they are to finishing always gets lower.

And when such a committee does finish, the fact that it finished (and solved whatever problem it was tasked) will not have come as a surprise to itself. Instead, you as committee chair will say, “We’re done, as we foresaw from our previous meetings.”
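
To make the protocol concrete, here is a minimal sketch (my own illustration, not code from the original piece) in which the committee’s standing report is just a tuple of counts – second-order decisions left, first-order decisions left, meetings left. However the committee decides at each stage, the tuple strictly decreases in dictionary order, so the committee always knows it is getting closer to done.

```python
import random

def run_committee(second_order=1, seed=0):
    """Simulate the committee protocol; the report shrinks at every stage."""
    rng = random.Random(seed)
    # report = (second-order decisions left, first-order decisions left, meetings left)
    report = (second_order, 0, 3)                    # begin with three scheduled meetings
    while report != (0, 0, 0):
        second, first, meetings = report
        if meetings > 0:
            report = (second, first, meetings - 1)   # hold a meeting
        elif first > 0:
            extra = rng.randint(1, 4)                # a first-order decision:
            report = (second, first - 1, extra)      # ...schedule that many more meetings
        else:
            allowed = rng.randint(1, 3)              # a second-order decision:
            report = (second - 1, allowed, 0)        # ...grant that many first-order decisions
        print(report)                                # strictly decreasing in dictionary order
    print("Done -- and we saw it coming.")

run_committee()
```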

My self-monitoring machines carry out their self-monitoring in the same fashion as in the committee examples I just gave. (See the little appendix at the end for some examples.)

What does this have to do with the Eureka moment!?

Some problems are harder to self-monitor than others, in the sense of requiring a higher tier in the self-monitoring hierarchy just mentioned. Such problems are possible to solve while self-monitoring – and thus possible to solve without a Eureka moment – but may simply be too difficult to monitor.

Thus, one potential reason a machine has an ‘Aha!’ moment is that it simply fails to monitor itself, perhaps because doing so at the required level is too taxing (even though the problem was in principle monitorable). Solutions that arrive via such Eureka moments could, in principle, have been reached without one.

Here, though, is the surprising bit that I proved…

Of all the problems that machines can solve, only a fraction of them are monitorable at all.

The class of problems that are monitorable turns out to be a computationally meager class compared to the entire set of problems within the power of machines.

Therefore, most of the interesting problems that exist cannot be solved without a Eureka moment!

What does this mean for our creative efforts?

It means you have to be patient.

When you are carrying out idea-creation efforts, you are implementing some kind of program, and odds are good it may not be monitorable even in principle. And even if it is monitorable, you are likely to have little or no idea at which level to monitor it. (A problem being monitorable doesn’t mean it is obvious how to do so.)

The scary part of idea-mongering is that you don’t know if you will ever get another idea. And even if an oracle told you that there will be one, you have no way of knowing how long it will take.

It takes a sort of inner faith to allow yourself to work months or years on idea generation, with no assurance there will be a pay-off!

But what is the alternative? The space of problems for which you can gauge how close you are to solving it is meager.

I’d rather leave the door open to a great idea that comes with no assurance than be assured I will have a meager idea. You can keep your nilla wafer – I’m rolling the dice for raspberry cheesecake!

+++++++++++++++++++++++++++++++++

(The journal article on this is here http://www.changizi.com/ord.pdf, but I warn you it is eminently unreadable! An unpublished, readable paper I wrote on it back then can also be found here: http://www.changizi.com/Aha.pdf )

+++++++++++++++++++++++++++++++++

Appendix: Some examples of self-monitoring machines doing computations

For example, suppose the machine can add 1 on each step.  Then a self-monitoring machine can compute the function “y=x+7” via allowing itself only seven steps, or “meetings”. No matter the input x, it just adds 1 at each step, and it will be done.

To handle “y=2x”, a machine must allow itself one (first-order) decision, which will be to allow itself x steps, and add 1, x many times, starting from x. (This corresponds to having a self-monitoring level of omega, the first transfinite ordinal. For “y=kx”, the level would be omega * (k-1).)

In order to monitor “y=x^2” (i.e., “x squared”) it no longer suffices to allow oneself some fixed number of first-order decisions. One needs x many first-order decisions, and what x is changes depending on the input. So now the machine needs one second-order decision about how many first-order decisions it needs. Upon receiving x=17 as input, the machine will decide that it needs 16 more first-order decisions, and its first first-order decision will be to allow itself 17 steps (to add one) before making its next first-order decision. (This corresponds to transfinite ordinal omega squared. If the equation were “y=x^2 + k”, for example, the ordinal would be omega^2 + k.)
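
For the concrete-minded, here is a small sketch of that x-squared case in the same spirit as the committee sketch above (again my own illustration, not the formalism of the journal article): the progress report is a coefficient triple (a, b, c) standing for omega^2*a + omega*b + c, and it strictly decreases in ordinal (here, dictionary) order at every step, even though the total number of “+1” steps depends on the input.

```python
def show(a, b, c):
    """Render the coefficient triple (a, b, c) as omega^2*a + omega*b + c."""
    parts = [s for s, v in (("omega^2*%d" % a, a), ("omega*%d" % b, b), ("%d" % c, c)) if v]
    return " + ".join(parts) if parts else "0"

def square_self_monitoring(x):
    """Compute y = x*x by repeated +1, emitting a strictly decreasing ordinal report."""
    y = 0
    prev = (1, 0, 0)                      # omega^2: one second-order decision pending
    print("report:", show(*prev))
    # Second-order decision (made on seeing x): allow x first-order decisions.
    for first_order_left in range(x - 1, -1, -1):
        steps = x                         # each first-order decision buys x more '+1' steps
        cur = (0, first_order_left, steps)
        assert cur < prev                 # tuple (dictionary) order matches ordinal order here
        prev = cur
        while steps > 0:
            y += 1                        # the machine's only primitive operation
            steps -= 1
            cur = (0, first_order_left, steps)
            assert cur < prev
            prev = cur
        print("report:", show(*cur))
    return y

print(square_self_monitoring(5))          # 25, reached with reports descending below omega^2
```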

This hierarchy keeps going, to omega^omega, to omega^omega^omega, and so on.

~~~

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

This piece first appeared July 6, 2010, at Science 2.0.


The hardback of The Vision Revolution has been out for one year, and I couldn’t be happier with the reaction it has received, including reviews in fantastic places like the Wall Street Journal and Scientific American Mind and mentions in places like the New York Times. It even made New Scientist’s “best books of 2009” story! Soon it will appear in China, Korea and Germany.

There has, however, been one gnawing problem with the hardback.

…the problem is its hardbackiness.

To understand my trouble with hardbackiness, let me back up and explain what I was aiming for in writing the book.

As a start, let me first describe what I was not aiming for: Not an academic monograph, to be read only by specialists. Not journalist-style coverage of a topic. And not a book about how to help your brain, like “20 ways to make your brain smarter than the Johnsons next door.”

My aim was not only to write a book that is readable (and funny) to non-specialists (i.e., a “trade” or “popular” book). Rather, my aim was to build a book that is part of the scientific conversation.

By “part of the scientific conversation,” I mean that the book is filled with ideas and evidence that go beyond what is found in the technical journal articles.

That, I believe, is what makes a popular science book exciting to non-specialists and laymen: in reading the book they are not merely learning about science, but are witnessing a portion of the lively scientific exchange.

The reader is put within the scientific conversation itself.

I didn’t come to this philosophy about what makes a good popular science book on my own. As I struggled with the drafts of my first trade book proposals, I had the opportunity to meet with John Brockman (http://www.edge.org/3rd_culture/bios/brockman.html), the noted literary agent, author and the founder of The Edge (http://www.edge.org). It was he who laid out this good-popular-science book philosophy to me, and although it sounded obvious after he said it, it was by no means obvious to me beforehand.

That’s what makes authors like Desmond Morris, Richard Dawkins, Steven Pinker, Daniel Dennett and Andy Clark so compelling. It’s not merely that they write well, but that they’re making a scientific case for their viewpoint. …and you and I get to watch.

And so that’s what I did in The Vision Revolution, take the reader along as I lay out the case for a radical re-thinking of how we see. Color vision evolved for seeing skin and the underlying emotions, not for finding fruit. Forward-facing eyes evolved for seeing better in forests, not for seeing in depth. Illusions are due to our brain’s attempt to correct for the neural eye-to-brain delay, so as to “perceive the present.” And our ability to read is due to writing having culturally evolved to make written words look like natural objects, just what our illiterate visual system is competent at processing.

In aiming to be part of the lively scientific exchange, there was another thing I tried to inject into the book: I tried to not take things too seriously.

As I have discussed in an earlier piece [ http://www.science20.com/mark_changizi/mind_hacks_over_stacks_facts ], too often science is treated as a set of textbook facts. Textbooks usually give that impression, and even when they are careful to say that science is in fact deeply in flux, the textbook look and feel dupes most of us into imbuing the book with too much truthiness. This is especially a problem for the cognitive and brain sciences, because the object of study is the most complicated object in the known universe, and we very often don’t know what we’re talking about. (We don’t know jack: https://changizi.wordpress.com/2009/08/12/18/ )

And that brings me back to the most significant flaw with the hardback version of The Vision Revolution: its hardbackiness. The rigidity of a hardback suggests truthiness, and although I do believe the ideas I put forth and defend in the book are true, I don’t want the cover’s hardness to be part of my argument.

Luckily, The Vision Revolution is now out in paperback, and is so remarkably bendy that the reader cannot help but read with that engaged maybe-this-is-not-correct mindset, rather than the oh-look-at-all-those-true-things-science-has-figured-out one – exactly the frame of mind needed to truly be “part of the scientific conversation.”

~~~

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

This piece first appeared June 21, 2010, at Science 2.0. And, no, Brockman is not my agent.


As I lay inside the box in the pitch blackness waiting for the show to begin, I wonder if the operator forgot to start it. Nothing is happening – no sound, no sights…nothing at all. Ah, wait, did I just hear something? Maybe, although perhaps that was just part of the box’s machinery I am not supposed to hear. But now I’m hearing it again, more distinctly – a raspy visceral groaning.

Definitely the show has begun!

And now I feel it. The floor of the box upon which I am lying is doing…something. Yes, it’s vibrating, first under my shoulders, then under my feet, and now moving up my back.

I shift my weight on the firm rubber floor of the box, and the sounds and vibrations suddenly amplify. Did the box just react to me? I try shifting my weight again, and the box replies with a waterfall of tactile and auditory stimulation. The box is alive, and responding to my actions with an auditory-tactile symphony.

This 15-minute-long show-in-a-box is titled “Just Noticeable Difference,” and is the brainchild of Chris Salter, an artist and professor from Concordia University and author of Entangled: Technology and the Transformation of Performance (MIT Press). I got the opportunity to enter the box when his piece came to the Experimental Media and Performing Arts Center at Rensselaer Polytechnic Institute this winter, and I was asked to participate in a panel discussion about the work. Salter’s piece is an example of avant-garde, or experimental, art, because it pushes the boundaries of artistic experience.

And that brings me to the point of this piece I’m writing: my awakening to the importance of experimental arts.

You see, I have not always appreciated the experimental arts. As recently as several years ago I had the “Scrooge” philosophy of the experimental arts, summarized aptly by “Bah humbug!” My reasoning at the time was that scientists are interested in creative work as well, but have more solid criteria by which to judge whether the scientific “product” is sensible. There seemed to be no standards for the experimental arts, as evidenced by the sheer unconstrained artistic craziness one finds among the avant-garde. (If I had a nickel for every time I have found myself in a gallery asking, “Was it really necessary to use bodily fluid in the piece?”)

And here comes the irony… While I was bemoaning the “unconstrained artistic craziness” of the avant-garde artists, I was teaching my own science students to engage in unconstrained scientific craziness!

As a theorist, I live and breathe ideas, and must continuously come up with new ones aiming to explain heretofore unexplained scientific phenomena. As a mentor to young scientists in training, I realized that students need to learn more than just the science. They need to learn how to get an idea – how to discover. So I began an attempt to isolate principles that I thought had been helpful in my own creative process, an attempt that eventually led me to begin writing a book elaborating on these principles, tentatively titled ALOOF: How Not Giving a Damn Maximizes Your Creativity.

In addition to principles with labels such as “Master of None,” “Aloof” and “Sloth,” the principle most relevant to the avant-garde I called “Crazy”, and it goes something like this:

If it’s not crazy, it’s not worth pursuing.

If your idea isn’t crazy, then even if you can successfully show it to be true, people will say, “Yeah, I pretty much expected that.” That’s not what one wants to aim for! One wants to aim for the crazy, so that when you’re finished people will say, “Well I didn’t expect that!”

This “Crazy” advice (and the other advice I put together) I believed applied to any creative endeavor, not just to science – and my book, ALOOF, was intended for artists and entrepreneurs in addition to scientists.

Let’s sum up the state of my mind at this point. I criticized avant-garde artists for their craziness, all the while explicitly aiming for craziness as a scientist! In effect, I was teaching my students to be avant-garde scientists, and trying myself to be an avant-garde scientist, yet somehow failing to notice that this outlook had transformative implications for my view of avant-garde art.

Just like me, avant-garde artists are trying to be “crazy”, hoping to make that next non-incremental advance. There’s a method to the madness: the madness is a fundamental facet of the mechanism of the creative process, a process that eventually can break new artistic ground.

Although avant-garde artists and scientists have similarities to the extent they are aiming for non-incremental advances, there are fundamental differences.

One especially relevant difference is that a scientist can usually tell whether his or her crazy idea will work without publishing it and making it public. In my case, for example, my notebooks are filled with hundreds of truly embarrassingly crazy ideas, most of which thankfully never get broadcast. The standards by which my (hopefully crazy) science ideas stand or fall emanate from logic, parsimony and empirical fit, things I can gauge in-house.

Artists, on the other hand, cannot know for sure whether their crazy new idea works unless they try it out on people. Success or failure for the artist depends on how the piece “acts” upon the brains of viewers. Artists cannot hit the perfect aesthetic chord each time any more than a scientist can divine a new discovery without many failures. And usually the experimental art will “fail,” in the sense that it doesn’t tap into a rich new vein of artistic experience.  And when experimental art fails, it will often be a more public failure than the failures for the creative scientist. Avant-garde artists must be brave!

So cut the experimental artists some slack. A lot of their work is crazy, indeed.  And perhaps if experimental artists tried harder, all their work would be crazy.

This first appeared on April 26, 2010, as a feature at Science 2.0

=============

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).



For those who have not entered the world of Twitter, it is hard to fathom why people feel compelled to stream their lives to strangers 140 characters at a time. And such non-Twitter folk are also unlikely to fathom the purpose of blogging, especially in a world with more than 170 million blogs. Imagine the non-tweeting non-blogger’s disbelief, then, when they read story after story about how Twitter, Facebook, WordPress, Posterous and the other “Social Web 2.0” heavyweights are changing the world as we know it. “Hogwash!” might be their succinct reply.

But for those of us who have entered the world of social media, it is clear that there’s much more going on than streaming one’s life to strangers. So much is going on, in fact, that you could spend the rest of your life reading hack books on how to “do” social media more ably. And you could choose to connect only with “social media” gurus on Twitter and still acquire more than 100,000 “friends”.

With more than a half billion people hooked into social media, something big is, indeed, happening. But what? There are a variety of candidates to point to: the greater interconnectivity, the nature of the connectivity (e.g., “small world”), the speed at which information courses through the networks, the exposure to a wider variety of people and ideas, the enhanced capability for collaborations, the tight wedding of human connections with web content, and so on.

There is, however, one facet of social media that has gone largely unnoticed: Multiple personalities. Upon soaking myself in social media over the last year, I was surprised to find that many of those most steeped in social media maintain not just one blog, but several (and in some cases more), each devoted to his or her distinct interests. I have also found that it is similarly common to possess multiple Twitter identities; in one case it was weeks before I realized two “friends” were actually a single person. Maintenance of multiple personalities in real-life flesh-and-blood social networks is considerably more difficult.

At first I felt these multiple personalities were vaguely creepy. “Figure out who you are, and stick with it!” was my reaction. But gradually I have come to appreciate multiple personalities (and so have I). In fact, I now believe that the ease with which social media supports multiple personalities is one of the unappreciated powers of the Social Web 2.0.

To understand why multiple personalities are so powerful, let’s back up for a moment and recall what makes economies so innovative. While it helps if the economy is filled with creative entrepreneurs, the fundamental mechanism behind the economy’s genius is not the genius of individuals but the selective forces which enable some entrepreneurs to thrive and others to wither away. Selective forces of this broad kind underlie not just the entrepreneurial world, but also the sciences and the arts.

Scientific communities, for example, chug inexorably forward with discoveries, but this progress occurs by virtue of there being so many independently digging scientists in a community that eventually some scientists strike gold, even if sometimes only serendipitously. Whether entrepreneurial, scientific or artistic, communities can be creative even if a vast majority of their members fail to ever achieve something innovative.

This is where multiple personalities change the game. Whereas individuals were traditionally members of just one community, and risky ventures such as entrepreneurship, science and the arts could get only one roll of the dice, in the age of Social Web 2.0 people can split themselves into multiple selves inhabiting multiple communities. Although too much splitting will dilute the attention that can be given to the distinct personalities and thereby lower the chance that at least one personality succeeds in its allotted realm, with a small number of personalities one may be able to increase the chances that at least one of the personalities succeeds. For example, with two personalities taking their respective shots within two distinct communities, the “owner” of those personalities may have raised the probability that at least one personality succeeds by nearly a factor of two (although a factor greater than one is all one would need to justify splitting into two personalities).

With multiple personalities in hand, people can choose to take up creative endeavors they would not have been willing to enter into outside of social media because the risks of failure were too high. Multiple personalities can lower these risks.

One of the greatest underappreciated benefits of social media, then, may be that it brings a greater percentage of the world into creative enterprises they would not otherwise have considered.

This, I submit, is good.

This first appeared on February 22, 2010, as a feature at ScientificBlogging.com.

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Benchfly is a web site for scientists, especially lab scientists. I’ve been interviewed there in the past, and have since kept my eye on it, and on Alan Marnett, its founder. The organization of science, grants, and creativity is something often on my mind, as is looking into alternative ways to structure science (like here and here).

But whereas I have just jaw-jawed about alternative mechanisms for science, Benchfly is actually putting forth one. See the “Search for Research” video at their main page. Their three-minute video describes the mechanism, which roughly works like this: one uses a “Search for Research” toolbar for one’s searches, the advertising revenue gets piped into a research pot and matched by a donor (right now, SIGMA Life Sciences), and then one writes a 100-word proposal — yes, nearly Tweet-like in length! — to get a microgrant (e.g., $500). If toolbar users like the grantee’s research, more might come his or her way.

It’s too early, of course, to tell if this will work. But it’s that kind of dynamic market creativity that will eventually work, and Benchfly / Alan Marnett seems smart enough to do it!


This first appeared on January 7, 2010, as a feature at ScientificBlogging.com.

“Respected expert and director of the institute…” These are the words you hear as you are being introduced at a black-tie speaking engagement. You are an inventor, scientist, or artist, and this flattering introduction is music to your ears; had you seen these words written in the paper you would have saved a copy to show Mom. Finally, you are at the place every creative mind wishes to reach. The words wash back over you. “Respected”: The members of your community appreciate you. “Expert”: Your more than twenty years of dedication to the field have not gone unnoticed. “Director”: You have powerful tools and competent personnel to support your efforts. And “Institute”: Your work has attracted the funding of government, benefactors or investors.

You are liked, smart, powerful and rich! You’ve really made it!

Or have you? As an artist, scientist or inventor, success is defined in terms of your ideas – how many did you have that panned out, and how many were big? Being liked, smart, powerful and rich may be nice, but one can have these things and not have had the ideas that count toward the successful creative life. In fact, these seemingly nice things – being respected in one’s community, being an expert, having powerful tools, and having financial support – are a scourge on one’s creative potential. In order to harvest your full creative potential one must be…indifferent.

Indifferent to one’s community, indifferent to one’s previous talents or successful endeavors, indifferent to the tools one might have thus far accrued, and indifferent to sources of funding. Masters of ceremonies at black-tie events are unlikely to introduce a speaker as “the not particularly well-respected jack-of-all-trades and luddite penny-pincher,” but that is the signature of the creative individual extraordinaire. The actual introduction by the master of ceremonies sounds much nicer than this, but it is the signature of a creativity that was long-ago crushed; it is a eulogy for the dynamic idea-generating person you never came to be.

But why would indifference be helpful to creativity? Indifference helps an individual’s creativity because it helps the brain act more like a community of brains, and it is communities of brains where we find the greatest success stories for idea generation. Scientific, artistic and engineering communities are fantastically creative because there are many individuals working in parallel, each competitively striving for the next great idea.

Although most individuals in a community may not be successful at finding the next big idea, there will inevitably be some individuals who will be successful, even if only by accident. Individual scientists, artists and engineers tend to be utterly unlike these dynamic communities. Individuals tend to work serially, not in parallel; and individuals tend to concentrate their digging in one spot, rather than many. These tendencies for individuals are fine for the health of a creative community, but if one wants to be a creative individual, then one must ensure that one’s strategy for digging optimizes one’s own chance at hitting gold.

That sounds simple enough: in order for an individual to act like a community of idea-seekers, one must just carry out multiple directions of idea-generation in parallel. Dig many holes, not just one. However, it is exceedingly difficult for people to actually do this. The difficulty is not intellectual – we are, in principle, able to act like a (small) community of idea-generating individuals. The difficulty, instead, is psychological. We may be the smartest animals on Earth, but we are still animals, great apes in particular.

As such, we come with a suite of psychological attributes that, although especially helpful for surviving and reproducing among other humans in our ancient evolutionary environment, handicap us as idea hunters. Our handicaps center around the fact that we cannot help but desire to be the “respected expert and director of the institute,” a desire that inevitably kills the internal community needed inside a creative individual, and, instead, places our mind firmly within an external creativity-smothering community. The cure is to become indifferent, detached, aloof. … from communities, money, tools and even oneself.

(See also this ScientificBlogging piece on the benefits of being aloof: http://www.scientificblogging.com/mark_changizi/value_being_aloof_or_how… .)

Aloofily yours,

Mark Changizi

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


Benchfly’s Alan Marnett hit me with an in-depth interview on Dec 16, 2009. In addition to getting into the science, the nice thing about the interview was the opportunity to talk about different ways of being a scientist. As you’ll see, I suggest being an aloof son-of-a-bitch, something I also talk about in this piece titled “How Not to Get Absorbed in Someone Else’s Abdomen”.

—————————————

As research scientists, many of us spend a very large amount of time working on a very small subject. In fact, it’s not unusual for a biochemist to go through their entire career without ever physically observing the protein or pathway they work on. As we hyper-focus on our own niche of science, we run the risk of forgetting to take the blinders off to see where our slice of work fits into the rest of the pie.


For Dr. Mark Changizi, assistant professor and author of The Vision Revolution, science starts with the pie.  We spoke with Dr. Changizi about why losing focus on the big picture can hurt our research, how autistic savants show us the real capacity of the brain and what humans will look like a million years from now.

BenchFly: Your book presents theories on questions ranging from why our eyes face forward to why we see in color.  Big questions.  As a kid, was it your attraction to the big questions that drew you into science?

Mark Changizi: I sometimes distinguish between two motivations for going into science. First there’s the “radio kid,” the one who takes apart the radio, is always fascinated with how things work, and is especially interested in “getting in there” and manipulating the world. And then there’s the “Carl Sagan kid,” the one motivated by the romantic what-does-it-all-mean questions. The beauty of Sagan’s Cosmos series is that he packaged science in such a way that it fills the more “religious” parts of one’s brain. You tap into that in a kid’s mind, and you can motivate them in a much more robust way than you can from a here’s-how-things-work motivation. I’m a Carl Sagan kid, and was specifically further spurred on by Sagan’s Cosmos. As long as I can remember, my stated goal in life has been to “answer the questions to the universe.”

While that aim has stayed constant, my views on what counts as “the questions to the universe” have changed. As a kid, cosmology and particle physics were where I thought the biggest questions lay. But later I reasoned that there were even more fundamental questions; even if physics were different than what we have in our universe, math would be the same. In particular, I became fascinated with mathematical logic and the undecidability results, the area of my dissertation. With those results, one can often make interesting claims about the ultimate limits on thinking machines. But math is not the only thing more fundamental than physics – that math is more fundamental than physics is obvious. In a universe without our physics, the emergent principles governing complex organisms and evolving systems may still be the same as those found in our universe. Even economic and political principles, in this light, may be deeper than physics: five-dimensional aliens floating in goo in a universe with quite different physics may still have limited resources, and may end up with the same economic and political principles we fuss over.

So perhaps that goes some way to explaining my research interests.

Tell us a little about both the scientific and thought processes when tackling questions that are very difficult to actually prove beyond a shadow of a doubt.

This is science we’re talking about, of course, not math, so nothing in science is proven in the strong mathematical sense. It is all about data supporting one’s hypothesis, and all about the parsimonious nature of the hypothesis. Parsimony aims to explain the greatest range of data with the least amount of theory. That’s what I aim for.

But it can, indeed, be difficult to find data for the kinds of questions I am interested in, because they often make predictions about a large swathe of data nobody has. That’s why I typically have to generate 50 to 100 ideas in my research notes before I find one that’s not only a good idea, but one for which I can find data to test it. You can’t go around writing papers without new data to test it. If you want to be a theorist, then not only can you not afford to spend the time to become an experimentalist to test your question, but most of your questions may not be testable by any set of experiments you could hope to do in a reasonable period of time. Often it requires pooling together data from across an entire literature.

In basic research we are often hyper-focused on the details. To understand a complex problem, we start very simple and then assume we will eventually be able to assemble the disparate parts into a single, clear picture. In essence, you think about problems in the opposite direction – asking the big questions up front. Describe the philosophical difference between the two approaches, as well as their relationship in the process of discovery.

A lot of people believe that by going straight to the parts – to the mechanism – they can eventually come to understand the organism. The problem is that the mechanisms in biology were selected to do stuff, to carry out certain functions. The mechanisms can only be understood as mechanisms that implement certain functions. That’s what it means to understand a mechanism: one must say how the physical material manages to carry out a certain set of functional capabilities.

And that means one must get into the business of building and testing hypotheses about what the mechanism is for. Why did that mechanism evolve in the first place? There is a certain “reductive” strain within the biological and brain sciences that believes that science has no role for getting into questions of “why”. That’s “just so story” stuff.  Although there’s plenty of just-so-stories – i.e., bad science – in the study of the design and function of biological structure, it by no means needs to be. It can be good science, just like any other area of science. One just needs to make testable hypotheses, and then go test it. And it is not appreciated how often reductive types themselves are in the business of just-so-stories; e.g., computational simulators are concerned just with the mechanisms and often eschew worrying about the functional level, but then allow themselves a dozen or more free parameters in their simulation to fit the data.

So, you have got to attack the functional level in order to understand organisms, and you really need to do that before, or at least in parallel with, the study of the mechanisms.

But in order to understand the functional level, one must go beyond the organism itself, to the environment in which the animal evolved. One needs to devise and test hypotheses about what the biological structure was selected for, and must often refer to the world. One can’t just stay inside the meat to understand the meat.

Looking just at the mechanisms is not only not sufficient, but will tend to lead to futility. An organism’s mechanisms were selected to function only when the “inputs” were the natural ones the organism would have encountered. But when you present a mechanism with an utterly unnatural input, the meat doesn’t output, “Sorry, that’s not an ecologically appropriate input.” (In fact, there are results in theoretical computer science saying that it wouldn’t be generally possible to have a mechanism capable of having such a response.) Instead, the mechanism does something. If you’re studying the mechanism without an appreciation for what it’s for, you’ll have teems and teems of mechanistic reactions that are irrelevant to what it is designed for, but you won’t know it.

The example I often use is the stapler. Drop a stapler into a primitive tribe, and imagine what they do to it. Having no idea what it’s for, they manage to push and pull its mechanisms in all sorts of irrelevant ways. They might spend years, say, carefully studying the mechanisms underlying why it falls as it does when dropped from a tree, or how it functions as crude nunchucks. There are literally infinitely many aspects of the stapler mechanism that could be experimented upon, but only a small fraction are relevant to the stapler’s function, which is to fasten paper together.

In explaining why we see in color, you suggest that it allows us to detect the subtleties of complex emotions expressed by humans – such as blushing.  Does this mean colorblind men actually have a legitimate excuse for not understanding women?!

… to see my answer, and the rest of the interview, go to Benchfly.


Male anglerfish are born with an innate desire to not exist. As soon as a male reaches maturity, he acquires an urge to find a female, sink his teeth into her, and grow into her. This evolved because anglerfish live in the dark ocean abyss with few mating opportunities. By giving up his life to be part of the female, the male can reproduce more often. It’s not clear he can appreciate all the sex he’s getting, however, because much of his body and brain atrophies and fuses with her body. Nevertheless, that’s where male anglerfish want to be – that’s a full male anglerfish life. And you thought you had problems. At least you’re not partially absorbed in someone else’s abdomen. Let’s toast our fortune: We are not male anglerfish!

[Image: Creative community of anglerfish trying to absorb you. (Will Carey) – http://powerfodder.tumblr.com/post/292745035/will-carey-anglerfish-changizi-community]

Or are we? Although we have no innate drive to stick our heads into the sides of other people, we do have a drive to stick our heads into groups of people – into communities, tribes, villages and clubs. We’re social primates, and a full human life is centered on the communities we’re in, and our place within them. There aren’t many hermits, and most of those who are probably wish they weren’t. Communities of people have bull’s-eyes on them that are irresistible to us humans. Although communities are necessary for a full life – e.g., family, bowling league, and Civil War reenactment society – there are some communities that are especially damaging to one’s creative health. Creative communities – they are the creativity killers. For scientists, for example, their female anglerfish is the community of scientists, a community which is creative as a whole, but which tends to snuff out the creativity of individuals within it. Not only are these creative communities dangerous to one’s creativity, but they seductively attract creativity-seeking individuals into them like moths to a creativity-scorching flame.

That creative communities are alluring to the aspiring creativity maven is not surprising: we all want friends who understand what we do and appreciate our accomplishments. What is surprising, and is not widely recognized, is the extent to which these creative communities are destructive. The problem for the male anglerfish is that his entire world becomes shrunken down, from a three-dimensional world of objects and adventures to a zero-dimensional world of gamete-release. The problem for us is that we’re equipped with a brain that, upon being placed within a community, reacts by severely shrinking its view of the world. Once the psychological transformation has completed, one’s view of the world has become so radically constricted that one cannot see the world beyond the community.

The source of this shrinkage is something called “adaptation,” or “habituation.” When you walk from a bright sunny street to a dimly lit pub, the pub initially feels entirely dark inside. After a while, however, your eyes habituate to the low light level, and you see it as highly varied in light level: it looks dark inside that mouse-hole in the wall, bright where the uncovered light bulb is, and, scattered around the room, you see dozens of other light-levels spanning the dark-light range. This is clearly advantageous for you, because you effectively began as blind in the pub, and minutes later could see.

In order to make it happen, though, you underwent a kind of “world shrinkage,” in particular a kind of “luminance shrinkage,” where luminance refers to the amount of light coming toward your eye from different directions around you. When you first entered the pub, all the differing luminance levels in the pub were treated by your visual system as pretty much the same, namely “very very dark”; at that point in time your eyes were habituated to the wide world of luminances found on a sunny day outside. The “sunny” world of luminances differs in two respects from the “pub” world of luminances. First, the average luminance in sunny world is much higher than that in pub world. Second, and more important for our purposes here, sunny world has a much wider range of luminances than pub world – from the high luminance of a sun-reflecting car windshield to the low luminance of the gaps in a sewer grating.

Our eyes have the ability not only to adapt to new light levels (e.g., high versus low), but also to new levels of variability (e.g., wide versus narrow). When you habituate from sunny world to pub world, your eyes and visual system treat the tiny range of luminance levels found in pub world as if they are just as wide as the range of luminances found in sunny world. Your entire perceptual space for brightness has shrunk down to apply to what is a minuscule world in terms of luminance. This kind of world shrinkage is one of the many engineering features that make mammals like us so effective. All our senses are built with these adaptation mechanisms at work, and not just for simple features like luminance or color, but also complex images like faces.
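
A toy numerical way to picture this range adaptation (my simplification for illustration, not a model from the vision literature): treat perceived brightness as the raw luminance rescaled by whatever range the visual system is currently habituated to.

```python
# Toy "range adaptation": brightness is luminance rescaled by the adapted range.
def perceived(luminance, adapted_min, adapted_max):
    """Map a luminance onto a 0..1 brightness scale set by the currently adapted range."""
    b = (luminance - adapted_min) / (adapted_max - adapted_min)
    return min(1.0, max(0.0, b))                    # clip anything outside the adapted range

pub = [0.5, 2.0, 8.0]                               # hypothetical pub luminances (arbitrary units)

# Just in from sunny world: still adapted to a huge range, so everything reads as "very very dark".
print([round(perceived(L, 0.0, 10000.0), 3) for L in pub])   # [0.0, 0.0, 0.001]

# After habituating to pub world: the very same luminances now span the whole brightness range.
print([round(perceived(L, 0.5, 8.0), 3) for L in pub])       # [0.0, 0.2, 1.0]
```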

In fact, our heads are teeming with world-shrinking mechanisms that go far beyond our senses, invading the way we think and reason. When we enter a creative community, varieties of adaptation mechanisms are automatically elicited inside us, helping to illuminate the intellectual world inside the community. Ideas within the community that were impossible for us to distinguish become stark oppositions. Similar mechanisms are played out for our social world – the hierarchies we care to climb, and the people we care to impress. At first we don’t appreciate the status differences within the hierarchy, even if we abstractly know them; but eventually we come to “feel” the gulf between each tier. While having these mechanisms is fundamental to our success in tribes, and was thus selected for, our creative integrity was not on the evolutionary ledger. Creative communities are dank pubs, and once we’ve optimized ourselves to living on the inside, our full range of reasoning is brought to bear on a narrow spectrum of ideas, a spectrum that we’re under the illusion is as wide as it can be. And so we don’t realize the world has shrunk at all.

Mark Changizi is Professor of Cognitive Science at RPI, the author of The Vision Revolution (Benbella, 2009) and The Brain from 25,000 Feet (Kluwer, 2003), and is aloof.

[A nice story somewhat related to this in ScienceDaily.]
