
Archive for the ‘Creativity’ Category

A view of "Plastic Animal Chess" in action, on a tile floor.

I just wrote a piece for Wired UK on creativity and child-like irreverence, and I talk about the game my daughter invented, a variant on chess. I had taken some photos, though, that did not make it into the story.

The first is this one here at the top, an animal-level view of the game in action on the tile floor of my sun room.

The second is below, my daughter’s hand-written rules themselves, with blanks where we filled in the animals used for each piece type.

My eight-year-old daughter's hand-written rules for plastic animal chess.

UPDATE: Wired has published a “Part 2” to my daughter’s chesscapades… In this new piece, they write up my daughter’s hand-written rules, and fill in some rules that weren’t quite in hers, because she left a lot ambiguous. When she and I played, she chose a 5 by 17 tile stretch of the sun room, with the pieces at opposite ends. I’ll need to get her to remind me of the exact starting positions.

UPDATE: On the May 6, 2011, episode of ‘The Big Bang Theory’, they had a funny bit with Sheldon creating a radical new form of geeky chess. Could it be that they saw the story and it motivated their bit? Who knows? But, between you and me, I’m telling my little girl she’s responsible for the bit.

~~~

For more on irreverence and creativity, see irreverence and aloofness and… anglerfish.

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books, 2009) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books, 2011). His first book was The Brain from 25,000 Feet (Springer, 2003).


Meghan Casserly recently wrote a piece at Forbes on the role of social media for creativity, and specifically about one’s ability to have more than one identity. Does this help or hurt? Read her piece here.

She discusses a piece I wrote on this issue last year, and which I have put below…

===

Multiple Personality Social Media

For those who have not entered the world of Twitter, it is hard to fathom why people feel compelled to stream their lives to strangers 140 characters at a time. And such non-Twitter folk are also unlikely to fathom the purpose of blogging, especially in a world with more than 170 million blogs. Imagine the non-tweeting non-blogger’s disbelief, then, when they read story after story about how Twitter, Facebook, WordPress, Posterous and the other “Social Web 2.0” heavyweights are changing the world as we know it. “Hogwash!” might be their succinct reply.

But for those of us who have entered the world of social media, it is clear that there’s much more going on than streaming one’s life to strangers. So much is going on, in fact, that you could spend the rest of your life reading hack books on how to “do” social media more ably. And you could choose to connect only with “social media” gurus on Twitter and still acquire more than 100,000 “friends”.

With more than a half billion people hooked into social media, something big is, indeed, happening. But what? There are a variety of candidates to point to: the greater interconnectivity, the nature of the connectivity (e.g., “small world”), the speed at which information courses through the networks, the exposure to a wider variety of people and ideas, the enhanced capability for collaborations, the tight wedding of human connections with web content, and so on.

There is, however, one facet of social media that has gone largely unnoticed: Multiple personalities. Upon soaking myself in social media over the last year, I was surprised to find that many of those most steeped in social media maintain not just one blog but several, each devoted to his or her distinct interests. I have also found that it is similarly common to possess multiple Twitter identities; in one case it was weeks before I realized two “friends” were actually a single person. Maintenance of multiple personalities in real-life flesh-and-blood social networks is considerably more difficult.

At first I felt these multiple personalities were vaguely creepy. “Figure out who you are, and stick with it!” was my reaction. But gradually I have come to appreciate multiple personalities (and so have I). In fact, I now believe that the ease with which social media supports multiple personalities is one of the unappreciated powers of the Social Web 2.0.

To understand why multiple personalities are so powerful, let’s back up for a moment and recall what makes economies so innovative. While it helps if the economy is filled with creative entrepreneurs, the fundamental mechanism behind the economy’s genius is not the genius of individuals but the selective forces which enable some entrepreneurs to thrive and others to wither away. Selective forces of this broad kind underlie not just the entrepreneurial world, but also the sciences and the arts.

Scientific communities, for example, chug inexorably forward with discoveries, but this progress occurs by virtue of there being so many independently digging scientists in a community that eventually some scientists strike gold, even if sometimes only serendipitously. Whether entrepreneurial, scientific or artistic, communities can be creative even if a vast majority of their members fail to ever achieve something innovative.

This is where multiple personalities change the game. Whereas individuals were traditionally members of just one community, and risky ventures such as entrepreneurship, science and the arts could get only one roll of the dice, in the age of Social Web 2.0 people can split themselves into multiple selves inhabiting multiple communities. Although too much splitting will dilute the attention that can be given to the distinct personalities and thereby lower the chance that at least one personality succeeds in its allotted realm, with a small number of personalities one may be able to increase the chances that at least one of the personalities succeeds. For example, with two personalities taking their respective shots within two distinct communities, the “owner” of those personalities may have raised the probability that at least one personality succeeds by nearly a factor of two (although a factor greater than one is all one would need to justify splitting into two personalities).
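To make that arithmetic concrete, here is a minimal sketch. The per-personality success probability is a made-up number of my own, and it assumes the two communities judge the personalities independently and ignores any dilution of attention:

```python
# Probability that at least one of n independent personalities "succeeds,"
# where p is a made-up, purely illustrative per-personality success probability.
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

p = 0.05
print(p_at_least_one(p, 1))                         # 0.05          -- one roll of the dice
print(p_at_least_one(p, 2))                         # approx 0.0975 -- two personalities
print(p_at_least_one(p, 2) / p_at_least_one(p, 1))  # approx 1.95   -- just shy of a factor of two
```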

With multiple personalities in hand, people can choose to take up creative endeavors they would not have been willing to enter into outside of social media because the risks of failure were too high. Multiple personalities can lower these risks.

One of the greatest underappreciated benefits of social media, then, may be that it brings a greater percentage of the world into creative enterprises they would not otherwise have considered.

This, I submit, is good.

~~~

This first appeared February 22, 2010, at Science 2.0.

Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books). He is working on his fourth book at the moment, tentatively titled Making Faces, about emotions and facial expressions.


This week a computer science researcher named Vinay Deolalikar claimed to have a proof that P is not equal to NP.

Let’s set aside what this means for another day, lest I get distracted.  The important thing now is that this is big. Huge, even!

If, that is, he’s correct.

But correct or not, that’s the kind of thing one expects to see in academia. Tenure gives professors job security and research freedom, exactly the conditions needed to enable them to make the non-incremental breakthroughs that fundamentally alter the intellectual landscape. (And in the case of P not equal to NP, to acquire fame and fortune.)

It is illuminating, then, to note that the man behind the new purported proof is not in academia at all, but in industry – at HP Labs.

And he’s not alone in his non-academic status.

For example, infamously reclusive mathematician Grigori Perelman (who proved the Poincaré conjecture) rejected tenure-track positions for a research position.

Big discoveries such as these from researchers outside of academia may be symptoms of a deep and systemic illness in academia, an illness which inhibits professors from making big-leap theoretical advances.

The problem is simply this: You can’t write a grant proposal whose aim is to make a theoretical breakthrough.

“Dear National Science Foundation: I plan on scrawling hundreds of pages of notes, mostly hitting dead ends, until, in Year 4, I hit pay-dirt.”

Theoretical breakthroughs can’t be mapped out in advance. You can’t know you’ve broken through until you’re…through.

…at which point there is nothing left to propose to do in a grant application.

“Fine,” you might say. “If you can’t write a grant proposal for theoretical innovation, then don’t bother with grants.”

And now we find the crux of the problem.

In academia grant-getting is paramount. Universities are a business. Not a business of student education, and not a business of fundamental intellectual research. Universities are in the business of securing grant funds. That’s how they survive.  And because grants are the university’s bread and butter, grants become the academic professor’s bread and butter.

Getting grants is the principal key to individual success in academia today. They get you more space, more money, more monikers, more status, and more invitations to lunch with the president.

To ensure one is in the good graces of one’s university, the young creative aspiring assistant professor must immediately begin applying for grants in earnest, at the expense of spending energies on uncertain theoretical innovation.

In order to have the best chance at being funded, one’s proposed work will often be a close cousin of one’s doctoral or post-doctoral work.  And the proposed work – in order to be proposed at all – must be incremental, and consequently applied in some way, to experiments or to the construction of a device of some kind.

So a theorist in academia must set aside his or her theoretical work, and propose to do experimental or applied work, where his or her talents do not lie.

But if you’re good at theory, you really ought to be doing theory, not its application. If Vinay Deolalikar is right about his proof – and probably even if he’s mistaken – then he should be spending his time proving new things, not carrying out a five year plan to, say, build a better gadget based on it. There are others much better at the application side for that.

But that’s where tenure comes in, right? With tenure, professors can forego grants, and become intellectually unhinged (in the good way).

There are severe stumbling blocks, however.

First, once one builds a lab via grant money (on the way to tenure), one’s research inevitably changes. And, without realizing it, one dupes oneself into thinking that the funded research direction is what one does. After all, it is the source of one’s new-found status.

Second, once one has a lab, one does not want to become the person others whisper about as having “lost funding.” The loss of status is too psychologically severe for any mere human to take, and so maintaining funding becomes the priority.

But to keep the funding going, the best strategy is to do more follow-up incremental work. …more of the same.

And in what feels like no time at all, two decades have flown by, and (if you’re “lucky”) you’re the bread-winning star at your university and in your research discipline.

But success at that game meant you never had time to do the creative theoretical leaps you had once hoped to do. You were transformed by the contemporary academic system into an able grant-getter, and somewhere along the way lost sight of the more fundamental overthrower-of-dogma and idea-monger identity you once strived for.

Were the “P is not equal to NP” proof claimer, Vinay Deolalikar, a good boy of academia, he would have spent his time applying for funding to apply computer science principles to, say, military or medical applications, and not wasted his time with risky years of effort toward proofs like the one he put together, where no grant-funding is at stake for the university. Were Vinay Deolalikar in academia, he’d be implicitly discouraged from such an endeavor.

If we are to have any hope of understanding the brain, for example, then academia must be fixed. The brain is inordinately more complicated than physics, and in much more need of centuries of theoretical advance than physics. Yet theorists in neuroscience striving for revolutionary theoretical game-changers are extremely rare, and often come from outside.

One simple step in the right direction would be to fund the scientist, not the proposal. The best (although still not great) argument that you’re capable of theoretical innovation is that you’ve made one or more before. This resolves the dilemma of having to propose, impossibly, a seminal theoretical discovery in advance.

In the longer term, more is needed. New models for funding academics must be invented, where the aim is for a system that optimally harnesses the creative potential of professors to change the world, and not just to keep universities afloat.

~~~~

This first appeared August 11, 2010, at Psychology Today.

Mark Changizi is the author of THE VISION REVOLUTION (Benbella, 2009) and HARNESSED: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella, 2011); he was recently attracted to a position outside of academia, as the Professor of Human Cognition at 2AI Labs.


Yesterday I mentioned Carl Sagan and Cosmos to one of my summer research interns.

“Who is Carl Sagan?” he asked.

“You know, billions and billions of stars…,” I implored. [Yes, I know he apparently never actually said that in the show.]

“Nope. Never heard of him,” he responded, worried now that I was disappointed in his knowledge base.

I was disappointed. Not at my student, but that the show that had helped coax me forward in the sciences has lost so much ground that many of today’s college students have never even heard of it, much less seen it.

What’s the big deal with Sagan’s Cosmos? There have been, after all, loads of television shows about the sciences and cosmology over the years, and many of the more recent ones have been much more elaborately produced than Cosmos.

Cosmos is different, though, and in my experience over the years I have found many (mostly of my generation) who agreed. More than any other show (it seems to me), Cosmos affected people and propelled them into the sciences.

As an undergraduate, I had noticed that my fellow physics majors could be approximately split into two categories. The first group I called “radio-kids.” These were the students who, as children, had enjoyed disassembling the radio and putting it back together again, sometimes with improvements. The second group of physics students I labeled “Sagan kids.” These were the students who, as kids, hardly knew that radios were built out of parts, but had watched and were propelled forward by Carl Sagan’s Cosmos series. I was a Sagan kid.

The distinction between radio kids and Sagan kids also maps nicely onto two broadly distinct strategies for motivating kids to enter the sciences.

Radio kids represent the “get practical” approach to science motivation. You want students to be pulled into science? Then make it relevant to their everyday lives – show them that science is useful.

That’s the argument, at least. But there are at least two glaring problems with the “get practical” strategy.

The first problem is that it is not quite true that the sciences are useful. As a scientist, I tend to feel that I employ my training in lots of aspects of my life, although I may be fooling myself. And even if we scientists do tend to employ our “science” for practical reasons, there’s no avoiding the fact that much of the world gets on with the practicalities of life without much science in their head. And, at any rate, wouldn’t an MBA be more practical than a physics or science major for most students, in terms of helping them secure a good job?

The second problem with the “get practical” strategy to science motivation is this: ‘practical’ is boring! People aren’t motivated to change the direction of their life for practicality. They can certainly be brow-beaten into choosing a practical major (e.g., by their parents or by “good” practical sense), but this is a grudging and unromantic choice.

Whereas radio kids represent the “get practical” approach to science education and motivation, Sagan kids represent the “life, the universe and everything” strategy. Such a strategy taps into one’s “spiritual” or “religious” brain, getting at one’s romantic desire to figure out “what it all means” and “why there is anything…at all.” That’s the kind of motivation that can redirect a life into a science.

And this is what Sagan’s Cosmos had in spades. If you haven’t seen the series, then it will sound ridiculously corny when you learn that Sagan would sit in a futuristic space ship and travel at much-greater-than-light-speed throughout the universe, with inspiring electronic elevator-esque music by Vangelis Papathanassiou (probably corny to kids today, but still awesome to me). When Sagan wasn’t riding his sleek space ship, we were treated to the great scientists of history in period garb, struggling with their attempts to grasp the universe.

The result: a show designed to harness our “religious sense,” even though Sagan was resolutely non-religious (as am I). That’s what Carl Sagan’s got that Michio Kaku’s not (although Kaku comes closer than anyone else since Sagan).

The moral of Sagan’s Cosmos is to utterly reject the “get practical” strategy, and instead aim for the least practical direction of all, toward life, the universe and everything.

Lucky for us, Cosmos is available for free on Hulu and Netflix. But it’s time for the next generation of science educators to comprehend Sagan’s secret sauce, and to use it liberally on the kids of today.

~~~

This first appeared July 2, 2010, at Psychology Today.

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).


“My plan for today:

1. Pick up dry cleaning.
2. Go to dentist.
3. Think up brilliant idea.”

Good luck with that third bullet. Big ideas can’t be planned like growing tomatoes in one’s garden. We stumble upon ideas, and although we can sometimes recall how we got there, we could not have anticipated the discovery in advance. That’s why grant proposals never wrap up with, “And via following this four-part plan, I will have arrived at a ground-breaking new hypothesis by year three.”

Three impossible thoughts before breakfast we can manage, but one great idea before dinner we cannot.

Unplanned ideas are often best illustrated by ‘Eureka!’ or ‘Aha!’ moments, like Einstein’s clock tower moment that sparked his special relativity, or Archimedes’ bathtub water-displacement idea.

Why are great ideas so unanticipatable?

Perhaps ideas cannot be planned because of some peculiarity of our psychology. Had our brains evolved differently, perhaps we would never have Eureka moments.

On the other hand, what if it is much deeper than that? What if the unplannability of ideas is due to the nature of ideas, not our brains at all? What if the computer brain, Hal, from 2001: A Space Odyssey were to say, “Something really cool just occurred to me, Dave!”

In the late 1990s I began work on a new notion of computing which I called “self-monitoring” computation. Rather than having a machine simply follow an algorithm, I required that a machine also “monitor itself.” What this meant was that the machine must at all stages report how close it is to finishing its work. And, I demanded that the machine’s report not merely be a probabilistic guess, but a number that gets lower on each computation step.

What was the point of these machines? I was hoping to get a handle on the unanticipatability of ideas, and to understand the extent to which Eureka moments are found for any sophisticated machine.

If a problem could be solved via a self-monitoring machine, then that machine would come to a solution without a Eureka moment. But, I wondered, perhaps I would be able to prove that some problems are more difficult to monitor than others. And, perhaps I would be able to show that some problems are not monitorable at all – and thus their solutions necessitate Eureka moments.

On the basis of my description of self-monitoring machines above, one might suspect that I demanded that the machine’s “self-monitoring report” be the number of steps left in the algorithm. But that would require machines to know exactly how many steps they need to finish an algorithm, and that wouldn’t allow machines to compute much.

Instead, the notion of “number” in the self-monitoring report is more subtle (concerning something called “transfinite ordinal numbers”), and can be best understood by your and my favorite thing…

Committee meetings.

Imagine you have been placed on a committee, and must meet weekly until some task is completed. If the task is easy, you may be able to announce at the first meeting that there will be exactly, say, 13 meetings. Usually, however, it will not be possible to know how many meetings will be needed.

Instead, you might announce at the first meeting that there will be three initial meetings, and that at the third meeting the committee will decide how many more meetings will be needed. That one decision about how many more meetings to allow gives the committee greater computational power.

Now the committee is not stuck doing some fixed number of meetings, but can, instead, have three meetings to decide how many meetings it needs. This decision about how many more meetings to have is a “first-order decision.”

And committees can be much more powerful than that.

Rather than deciding after three meetings how many more meetings there will be, you can announce that at the end of that decided-upon number of meetings, you will allow yourself one more first-order decision about how many meetings there will be. The decision in this case is to allow two first-order decisions about meetings (the first occurring after three initial meetings).

You are now beginning to see how you as the committee head could allow the committee any number of first-order decisions about more meetings. And the more first-order decisions allowed, the more complicated the task the committee can handle.

Even with all these first-order decisions, committees can get themselves yet more computational power by allowing themselves second-order decisions, which concern how many first-order decisions the committee will be allowed to have. So, you could decide that on the seventh meeting the committee will undertake a second-order decision, i.e., a decision about how many first-order decisions it will allow itself.

And once you realize you are allowed second-order decisions, why not use third-order decisions (about the number of second-order decisions to allow yourself), or fourth-order decisions, and so on.

Committees who follow a protocol of this kind will always be able to report how close they are to finishing their work. Not “close” in the sense of the exact number of meetings. But “close” in the sense of the number of decisions left at all the different levels. And, after each meeting, the report of how close they are to finishing always gets lower.
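To make the bookkeeping concrete, here is a minimal sketch of such a protocol. The decision rules are arbitrary placeholders of my own, not anything from the formal construction; the point is only that the report – the triple (second-order decisions left, first-order decisions left, meetings left) – strictly decreases in dictionary order after every meeting.

```python
# A toy committee whose "how close are we?" report is the triple
# (second-order decisions left, first-order decisions left, meetings left).
# After every meeting the triple strictly decreases in dictionary (lexicographic)
# order, which is the sense in which the report always gets lower.
def run_committee(second_order=1, first_order=0, meetings=3):
    report = (second_order, first_order, meetings)
    week = 0
    while report != (0, 0, 0):
        week += 1
        s, f, m = report
        if m > 0:                    # an ordinary meeting: use up one meeting
            report = (s, f, m - 1)
        elif f > 0:                  # a first-order decision: grant more meetings
            report = (s, f - 1, 2)   # (arbitrarily: two more)
        else:                        # a second-order decision: grant more
            report = (s - 1, 3, 0)   # first-order decisions (arbitrarily: three)
        print(f"meeting {week}: report = {report}")
    print("Done - and the committee saw it coming, meeting by meeting.")

run_committee()
```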

And when such a committee does finish, the fact that it finished (and solved whatever problem it was tasked with) will not have come as a surprise to itself. Instead, you as committee chair will say, “We’re done, as we foresaw from our previous meetings.”

My self-monitoring machines carry out their self-monitoring in the same fashion as in the committee examples I just gave. (See the little appendix at the end for some examples.)

What does this have to do with the Eureka moment!?

Some problems are harder to self-monitor than others, in the sense of requiring a higher tier in the self-monitoring hierarchy just mentioned. Such problems are possible to solve while self-monitoring – and thus possible to solve without a Eureka moment – but may simply be too difficult to monitor.

Thus, one potential reason why a machine has an ‘Aha!’ moment is that it simply fails to monitor itself, perhaps because it is too taxing to do so at the required level (even though the problem was in principle monitorable). The solutions behind such Eureka moments could, in principle, have been reached without one.

Here, though, is the surprising bit that I proved…

Of all the problems that machines can solve, only a fraction of them are monitorable at all.

The class of problems that are monitorable turns out to be a computationally meager class compared to the entire set of problems within the power of machines.

Therefore, most of the interesting problems that exist cannot be solved without a Eureka moment!

What does this mean for our creative efforts?

It means you have to be patient.

When you are carrying out idea-creation efforts, you are implementing some kind of program, and odds are good it may not be monitorable even in principle. And even if it is monitorable, you are likely to have little or no idea at which level to monitor it. (A problem being monitorable doesn’t mean it is obvious how to do so.)

The scary part of idea-mongering is that you don’t know if you will ever get another idea. And even if an oracle told you that there will be one, you have no way of knowing how long it will take.

It takes a sort of inner faith to allow yourself to work months or years on idea generation, with no assurance there will be a pay-off!

But what is the alternative? The space of problems for which you can gauge how close you are to solving it is meager.

I’d rather leave the door open to a great idea that comes with no assurance than be assured I will have a meager idea. You can keep your nilla wafer – I’m rolling the dice for raspberry cheesecake!

+++++++++++++++++++++++++++++++++

(The journal article on this is here: http://www.changizi.com/ord.pdf, but I warn you it is eminently unreadable! An unpublished, readable paper I wrote on it back then can also be found here: http://www.changizi.com/Aha.pdf)

+++++++++++++++++++++++++++++++++

Appendix: Some examples of self-monitoring machines doing computations

For example, suppose the machine can add 1 on each step.  Then a self-monitoring machine can compute the function “y=x+7” via allowing itself only seven steps, or “meetings”. No matter the input x, it just adds 1 at each step, and it will be done.

To handle “y=2x”, a machine must allow itself one (first-order) decision, which will be to allow itself x steps, and add 1, x many times, starting from x. (This corresponds to having a self-monitoring level of omega, the first transfinite ordinal. For “y=kx”, the level would be omega * (k-1).)

In order to monitor “y=x^2” (i.e., “x squared”) it no longer suffices to allow oneself some fixed number of first-order decisions. One needs x many first-order decisions, and what x is changes depending on the input. So now the machine needs one second-order decision about how many first-order decisions it needs. Upon receiving x=17 as input, the machine will decide that it needs 16 more first-order decisions, and its first first-order decision will be to allow itself 17 steps (to add one) before making its next first-order decision. (This corresponds to transfinite ordinal omega squared. If the equation were “y=x^2 + k”, for example, the ordinal would be omega^2 + k.)
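Here is a minimal sketch of that “y = x squared” run, with the report written as a pair rather than a transfinite ordinal: the pair (first-order decisions left, steps left) plays the role of an ordinal below omega squared. This is my paraphrase of the example above, not the machinery from the paper:

```python
# Toy self-monitoring computation of y = x^2.  The report
# (first-order decisions left, steps left in the current block)
# decreases in dictionary order on every step.
def square_with_monitoring(x):
    value = x
    report = (x - 1, 0)          # the second-order decision: allow x-1 first-order decisions
    while report != (0, 0):
        f, steps = report
        if steps > 0:            # an ordinary step: add 1
            value += 1
            report = (f, steps - 1)
        else:                    # a first-order decision: allow x more steps
            report = (f - 1, x)
    return value

print(square_with_monitoring(17))   # 289
```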

This hierarchy keeps going, to omega^omega, to omega^omega^omega, and so on.

~~~

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

This piece first appeared July 6, 2010, at Science 2.0.


The hardback of The Vision Revolution has been out for one year, and I couldn’t be happier with the reaction it has received, including reviews in fantastic places like the Wall Street Journal and Sciam Mind and mentions in places like the New York Times. It even made New Scientist’s “best books of 2009” story! Soon it will appear in China, Korea and Germany.

There has, however, been one gnawing problem with the hardback.

…the problem is its hardbackiness.

To understand my trouble with hardbackiness, let me back up and explain what I was aiming for in writing the book.

As a start, let me first describe what I was not aiming for: Not an academic monograph, to be read only by specialists. Not a journalist-style coverage of a topic. And not a book about how to help your brain, like “20 ways to make your brain smarter than the Johnsons’ next door.”

My aim was not only to write a book that is readable (and funny) to non-specialists (i.e., a “trade” or “popular” book). Rather, my aim was to build a book that is part of the scientific conversation.

By “part of the scientific conversation,” I mean that the book is filled with ideas and evidence that go beyond what is found in the technical journal articles.

That, I believe, is what makes a popular science book exciting to non-specialists and laymen: in reading the book they are not merely learning about science, but are witnessing a portion of the lively scientific exchange.

The reader is put within the scientific conversation itself.

I didn’t come to this philosophy about what makes a good popular science book on my own. As I struggled with the drafts of my first trade book proposals, I had the opportunity to meet with John Brockman (http://www.edge.org/3rd_culture/bios/brockman.html), the noted literary agent, author and founder of The Edge (http://www.edge.org). That’s his photo at the top. It was he who laid out this good-popular-science-book philosophy to me, and although it sounded obvious after he said it, it by no means was obvious to me beforehand.

That’s what makes authors like Desmond Morris, Richard Dawkins, Steven Pinker, Daniel Dennett and Andy Clark so compelling. It’s not merely that they write well, but that they’re making a scientific case for their viewpoint. …and you and I get to watch.

And so that’s what I did in The Vision Revolution: take the reader along as I lay out the case for a radical re-thinking of how we see. Color vision evolved for seeing skin and the underlying emotions, not for finding fruit. Forward-facing eyes evolved for seeing better in forests, not for seeing in depth. Illusions are due to our brain’s attempt to correct for the neural eye-to-brain delay, so as to “perceive the present.” And our ability to read is due to writing having culturally evolved to make written words look like natural objects, just what our illiterate visual system is competent at processing.

In aiming to be part of the lively scientific exchange, there was another thing I tried to inject into the book: I tried to not take things too seriously.

As I have discussed in an earlier piece [ http://www.science20.com/mark_changizi/mind_hacks_over_stacks_facts ], too often science is treated as a set of textbook facts. Textbooks usually give that impression, and even when they are careful to say that science is in fact deeply in flux, the textbook look and feel dupes most of us into imbuing the book with too much truthiness. This is especially a problem for the cognitive and brain sciences, because the object of study is the most complicated object in the known universe, and we very often don’t know what we’re talking about. (We don’t know jack: https://changizi.wordpress.com/2009/08/12/18/ )

And that brings me back to the most significant flaw with the hardback version of The Vision Revolution: its hardbackiness. The rigidity of a hardback suggests truthiness, and although I do believe the ideas I put forth and defend in the book are true, I don’t want the cover’s hardness to be part of my argument.

Luckily, The Vision Revolution is now out in paperback, and is so remarkably bendy that the reader cannot help but read with that engaged maybe-this-is-not-correct mindset, rather than the oh-look-at-all-those-true-things-science-has-figured-out mindset. That frame of mind puts the reader in just the right position to truly be “part of the scientific conversation.”

~~~

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

This piece first appeared June 21, 2010, at Science 2.0. And, no, Brockman is not my agent.


You are an idea-monger. Science, art, technology – it doesn’t matter which. What matters is that you’re all about the idea.

You live for it. You’re the one who wakes your spouse at 3 AM to describe your new inspiration. You’re the person who suddenly veers the car to the shoulder to scribble some thoughts on the back of an unpaid parking ticket. You’re the one who, during your wedding speech, interrupts yourself to say, “Hey, I just thought of something neat.”

You’re not merely interested in science, art or technology – you want to be part of the story of these broad communities. You don’t just want to read the book – you want to be in the book. …not for the sake of celebrity, but for the sake of getting your idea out there. You enjoy these creative disciplines in the way pigs enjoy mud: so up close and personal that you are dripping with it, having become part of the mud itself.

Enthusiasm for ideas is what makes an idea-monger, but enthusiasm is not enough for success. What is the secret behind people who are proficient idea-mongers? What is behind the people who have a knack for putting forward ideas that become part of the story of science, art and technology?

Here’s the answer many will give: Genius. There are a select few who are born with a gift for generating brilliant ideas beyond the ken of the rest of us. The idea-monger might well check to see that he or she has the “genius” gene, and if not, set off to go monger something else.

Luckily, there’s more to having a successful creative life than hoping for the right DNA. In fact, DNA has nothing to do with it.

“Genius” is a fiction. It is a throw-back to antiquity, where scientists of the day had the bad habit of “explaining” some phenomenon by labeling it as having some special essence. The idea of “the genius” is imbued with a special, almost magical quality. Great ideas just pop into the heads of geniuses in sudden eureka moments; geniuses make leaps that are unfathomable to us, and sometimes even to them; geniuses are qualitatively different; geniuses are special.

While most people labeled as a genius are probably somewhat smart, most smart people don’t get labeled as geniuses.

I believe that is because there are no geniuses – not, at least, in the qualitatively special sense. Instead, what makes some people better at idea-mongering is their style, their philosophy, their manner of hunting ideas.

Whereas good hunters of big game are simply called good hunters, good hunters of big ideas are called geniuses – but they only deserve the moniker “good idea-hunter”.

If genius is not a prerequisite for good idea-hunting, then perhaps we can take courses in idea-hunting. And there would appear to be lots of skilled idea-hunters from whom we may learn.

There are, however, fewer skilled idea-hunters than there might at first seem.

One must distinguish between the successful hunter, and the proficient hunter – between the one-time fisherman who accidentally bags a 200 lb fish, and the experienced fisherman who regularly comes home with a big one (even if not 200 lbs).

Communities can be creative even when no individual member is a skilled idea-hunter. This is because communities are dynamic evolving environments, and with enough individuals, there will always be people who do in fact generate fantastically successful ideas. There will always be successful idea-hunters within creative communities, even if these individuals are not skilled idea-hunters, i.e., even if they are unlikely to ever achieve the same caliber of idea again.

One wants to learn to fish from the fisherman who repeatedly comes home with a big one; these multiple successful huntings are evidence that the fisherman is a skilled fish-hunter, not just a lucky tourist with a record catch.

And what is the key behind proficient idea-hunters?

In a word: aloof.

Being aloof – from people, from money, from tools, and from oneself – endows one’s brain with amplified creativity. Being aloof turns an obsessive, conservative, social, scheming status-seeking brain into a bubbly, dynamic brain that resembles in many respects a creative community of individuals.

Being a successful idea-hunter requires understanding the field (whether science, art or technology), but acquiring the skill of idea-hunting itself requires taking active measures to “break out” from the ape brains evolution gave us, by being aloof.

I’ll have more to say about this over the next year, as I have begun writing my fourth book, tentatively titled Aloof: How Not Giving a Damn Maximizes Your Creativity. (See also http://bit.ly/9gdXhJ and http://bit.ly/5GuvmE for other pieces of mine on this general topic.) In the meantime, I would be grateful for your ideas about what makes a skilled idea-hunter. If a student asked you how to be creative, how would you respond?

Mark Changizi is Professor of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books).

This piece first appeared May 25, 2010, at ScriptPhd.


As I lay inside the box in the pitch blackness waiting for the show to begin, I wonder if the operator forgot to start it. Nothing is happening – no sound, no sights…nothing at all. Ah, wait, did I just hear something? Maybe, although perhaps that was just part of the box’s machinery I am not supposed to hear. But now I’m hearing it again, more distinctly – a raspy visceral groaning.

Definitely the show has begun!

And now I feel it. The floor of the box upon which I am lying is doing…something. Yes, it’s vibrating, first under my shoulders, then under my feet, and now moving up my back.

I shift my weight on the firm rubber floor of the box, and the sounds and vibrations suddenly amplify. Did the box just react to me? I try shifting my weight again, and the box replies with a waterfall of tactile and auditory stimulation. The box is alive, and responding to my actions with an auditory-tactile symphony.

This 15-minute-long show-in-a-box is titled “Just Noticeable Difference,” and is the brainchild of Chris Salter, an artist and professor from Concordia University and author of Entangled: Technology and the Transformation of Performance (MIT Press). I got the opportunity to enter the box when his piece came to the Experimental Media and Performing Arts Center at Rensselaer Polytechnic Institute this winter, and I was asked to participate in a panel discussion about the work. Salter’s piece is an example of avant-garde, or experimental, art, because it pushes the boundaries of artistic experience.

And that brings me to the point of this piece I’m writing: my awakening to the importance of experimental arts.

You see, I have not always appreciated the experimental arts. As recently as several years ago I had the “Scrooge” philosophy of the experimental arts, summarized aptly by “Bah humbug!” My reasoning at the time was that scientists are interested in creative work as well, but have more solid criteria by which to judge whether the scientific “product” is sensible. There seemed to be no standards for the experimental arts, as evidenced by the sheer unconstrained artistic craziness one finds among the avant-garde. (If I had a nickel for every time I have found myself in a gallery asking, “Was it really necessary to use bodily fluid in the piece?”)

And here comes the irony… While I was bemoaning the “unconstrained artistic craziness” of the avant-garde artists, I was teaching my own science students to engage in unconstrained scientific craziness!

As a theorist, I live and breathe ideas, and must continuously come up with new ones aiming to explain heretofore unexplained scientific phenomena. As a mentor to young scientists in training, it occurred to me that students need to learn more than just the science. They need to learn how to get an idea – how to discover. So I began an attempt to isolate principles that I thought had been helpful in my own creative processes, principles that eventually led me to begin writing a book elaborating on them, tentatively titled ALOOF: How Not Giving a Damn Maximizes Your Creativity.

In addition to principles with labels such as “Master of None,” “Aloof” and “Sloth,” the principle most relevant to the avant-garde I called “Crazy”, and it goes something like this:

If it’s not crazy, it’s not worth pursuing.

If your idea isn’t crazy, then even if you can successfully show it to be true, people will say, “Yeah, I pretty much expected that.” That’s not what one wants to aim for! One wants to aim for the crazy, so that when you’re finished people will say, “Well I didn’t expect that!”

This “Crazy” advice (and the other advice I put together) I believed applied to any creative endeavor, not just to science – and my book, ALOOF, was intended for artists and entrepreneurs in addition to scientists.

Let’s sum up the state of my mind at this point. I criticized avant-garde artists for their craziness, all the while explicitly aiming for craziness as a scientist! In effect, I was teaching my students to be avant-garde scientists, and trying myself to be an avant-garde scientist, yet somehow failing to notice that this outlook had transformative implications for my view of avant-garde art.

Just like me, avant-garde artists are trying to be “crazy”, hoping to make that next non-incremental advance. There’s a method to the madness: the madness is a fundamental facet of the mechanism of the creative process, a process that eventually can break new artistic ground.

Although avant-garde artists and scientists have similarities to the extent they are aiming for non-incremental advances, there are fundamental differences.

One especially relevant difference is that a scientist can usually tell whether his or her crazy idea will work without publishing it and making it public. In my case, for example, my notebooks are filled with hundreds of truly embarrassingly crazy ideas, most of which thankfully never get broadcast. The standards by which my (hopefully crazy) science ideas stand or fall emanate from logic, parsimony and empirical fit, things I can gauge in-house.

Artists, on the other hand, cannot know for sure whether their crazy new idea works unless they try it out on people. Success or failure for the artist depends on how the piece “acts” upon the brains of viewers. Artists cannot hit the perfect aesthetic chord each time any more than a scientist can divine a new discovery without many failures. And usually the experimental art will “fail,” in the sense that it doesn’t tap into a rich new vein of artistic experience. And when experimental art fails, it will often be a more public failure than the creative scientist’s. Avant-garde artists must be brave!

So cut the experimental artists some slack. A lot of their work is crazy, indeed.  And perhaps if experimental artists tried harder, all their work would be crazy.

This first appeared on April 26, 2010, as a feature at Science 2.0.

=============

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).




Markets work well when there’s a chain from wholesaler to retailer to customer…and back. If none of the customer payments makes it back to the wholesaler, soon there may be few to no wholesalers producing anything worth buying. That’s bad for wholesalers, bad for retailers, and bad for customers. That’s why, for example, Napster, YouTube and torrents upset the system.

Now let’s consider the analog for science journalism [or science communication, more generally], which aims to bring science to the public. If we were to try to force science journalism into the wholesale-retail-customer stamp, then scientists would be the wholesalers, science journalists the retailers, and the interested layperson the customer.

Although there’s something right about the analogy, there’s something deeply missing as well: there’s no “payback” to the science wholesaler. Unlike retailers, science journalists don’t pay scientists when they write about their discoveries. The incentives that are crucial to the functioning wholesale-retail-customer loop are not all accounted for, because the link from retail back to wholesale is missing. (In fairness, though, it is not entirely true that there are no incentives for the scientist wholesaler: science journalists provide exposure to a scientist’s work, which can have value of its own. These incentives are fairly weak, however, and appear to be valued by only a small fraction of scientists, such as trade book authors.)

Despite some prima facie similarities, then, science journalism is not currently playing the role of retailer in science. And although the “products” of scientists are being utilized by science journalists, scientists don’t amount to wholesalers because they are driven to do their research via incentives quite independent of market mechanisms.

Let’s consider, though, what would happen if the “incentive link” from science journalist to scientist were made. What if the role of science journalism were not just to fill the demand of the populace – i.e., to provide great science stories (packaged better than scientist-wholesalers generally can) for the minds of the populace – but also to serve as retail-style incentivizers for scientists? What if the role of science journalism were two-way: from scientist to laymen, but also the other way around?

How are science journalists to communicate to scientists the interests of the populace, and to motivate scientists to pursue research filling that demand? By paying scientists a cut of whatever they make, of course!

If you’re a science journalist, don’t stop reading quite yet.

Yes, science journalism today is on the skids, and it would appear that the last thing a science journalist needs to worry about is sending money to ol’ Changizi. However, we must remember to take our zero-sum-game hats off. By putting the right incentives in place, one can grow markets from dry or dead ones. Without the appropriate incentives, wholesalers stop producing, and retailers are left with nothing customers are interested in buying – no market.

But with the right incentives, tremendously rich and diverse markets can be tapped into, as wholesalers are motivated to create great products. Retailers get richer, not poorer, when they pay for products they sell.

Although this is all obvious for markets, would it work for science journalism? If market mechanisms were assembled that siphoned off some of the science journalism profit and channeled it to the scientists responsible for the discoveries, scientists would become not only more motivated to communicate their research to science journalists, but also more motivated to carry out research likely to be considered interesting to some niche market of laymen science consumers. With more scientists incentivized to carry out research on problems that are interesting to some swathe of consumers, science journalists may more quickly get their store shelves filled with highly sought-after product.

And with more great product, and of greater variety, the overall market for science journalism may very substantially rise. As with any better functioning market, perhaps science journalists would make a substantially better living by sending payments to their scientist wholesalers.

In addition to the potential advantages for science journalists, there are potential upsides for the public: the public would then have a means by which to communicate to scientists what they’re curious about. Sure, one might imagine a spike in research on, say, sex, but the interests among laymen are sophisticated and varied, with many niches of scientific interests.

Mechanisms of this kind may also provide new opportunities for getting funds into the hands of scientists. This may not only increase the total amount of funding directed toward the sciences, but also provide scientists with greater opportunities to find funding consistent with their interests, and provide funding for research directions that have no foreseeable application but nevertheless capture the imagination of some portion of the public.

But isn’t this a terrible idea? Science is not supposed to be entrepreneurship.  Scientists need to be independent in their search for truth, rather than trying to appeal to a market. Indeed, I agree (see https://changizi.wordpress.com/category/creativity/), but science today has drifted a long way from the days of the creative lone wolf professor pushing science in the directions he or she sees fit. The typical scientist today has had his or her independence swallowed up by another source: the quest for grant funding. The 21st century scientist spends much of his or her waking life shopping and applying for grants. Scientists may not be driven to satisfy the interests of the public, but they have become slaves to another master: the program director for this or that government funding agency.

The objection that scientists shouldn’t have to sully their independence is a day late and a grant dollar short. The question is not whether scientists have to sell themselves, but to whom they must sell themselves. Selling to the populace, through science journalists as retailer, may in principle provide considerable freedom, because there are often widely varying interests among the populace, and tremendous potential for niche science consumers.

I know, I know… science journalists are probably not comfortable thinking of themselves as retailers of anything, buying from wholesalers. But if it could work, perhaps it promises a new day for science journalism, and a new day for the practice of science.

[Addendum: By “journalists”, I really intend to refer to science communicators of all kinds, because ‘journalist’ may be defined in such a way that the science retailers I suggest aren’t journalists at all.]

This first appeared on February 16, 2010, as a feature at ScientificBlogging.com.

Mark Changizi is a professor of cognitive science at Rensselaer Polytechnic Institute, and the author of The Vision Revolution (Benbella Books).


