A word is vague if it has borderline cases. Yul Brynner (the lead in “The King and I”) is definitely bald, I am (at the time of this writing) definitely not, and there are many people who seem to be neither. These people are in the “borderline region” of ‘bald’, and this phenomenon is central to vagueness.
Nearly every word in natural language is vague, from ‘person’ and ‘coercion’ in ethics, ‘object’ and ‘red’ in physical science, and ‘dog’ and ‘male’ in biology, to ‘chair’ and ‘plaid’ in interior decorating.
Vagueness is the rule, not the exception. Pick any natural language word you like, and you will almost surely be able to concoct a case — perhaps an imaginary case — where it is unclear to you whether or not the word applies.
Take ‘book’, for example. “The Bible” is definitely a book, and a light bulb is definitely not. Is a pamphlet a book? If you dipped a book in acid and burned off all the ink, would it still be a book? If I write a book in tiny script on the back of a turtle, is the turtle’s back a book?
We have no idea how to answer such questions. The fact that such questions appear to have no determinate answer is roughly what we mean when we say that ‘book’ is vague.
And vagueness is intimately related to the ancient sorites paradox, where from seemingly true premises that (i) a thousand grains of sand makes a heap, and (ii) if n+1 grains of sand make a heap, then n make a heap, one can derive the false conclusion that one grain of sand makes a heap.
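The derivation itself is purely mechanical, which is part of what makes the paradox so unsettling. Here is a minimal sketch of it (the function name and starting count are illustrative, not from the original argument):

```python
def sorites_verdict(n, start=1000):
    """Apply premise (ii) repeatedly, starting from premise (i).

    Premise (i): `start` grains of sand make a heap.
    Premise (ii): if k+1 grains make a heap, then k grains do too.
    """
    k, is_heap = start, True        # premise (i)
    while k > n:
        k -= 1                      # one application of premise (ii)
        # is_heap stays True: nothing in premise (ii) ever flips it
    return is_heap

sorites_verdict(1)  # True: the absurd conclusion that one grain is a heap
```

Each step looks individually harmless, yet the loop carries the “heap” verdict all the way down to a single grain.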
Is vagueness a problem with our language, or our brains?
Or, could it be that vagueness is in some way necessary…
When you or I judge whether or not a word applies to an object, we are (in some abstract sense) running a program in the head.
The job of each of these programs (one for each word) is to output YES when input with an object to which the word applies, and to output NO when input with an object to which the word does not apply.
That sounds simple enough! But why, then, do we have vagueness? With programs like this in our head, we’d always get a clear YES or NO answer.
But it isn’t quite so simple.
Some of these “meaning” programs, when asked about some object, will refuse to respond. Instead of responding with a YES or NO, the program will just keep running on and on, until eventually you must give up on it and conclude that the object does not seem to clearly fit, nor clearly not fit.
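A toy sketch of such a “meaning program” makes the idea concrete. The word, the thresholds, and the function name here are all invented for illustration:

```python
def is_heap(grains):
    """A toy 'meaning program' for the word 'heap'.

    Clear cases answer at once; borderline inputs fall into
    endless deliberation and never return.
    """
    if grains >= 100:
        return True             # clearly a heap
    if grains <= 10:
        return False            # clearly not a heap
    while True:                 # borderline: the program never halts
        pass

is_heap(1000)   # True
is_heap(3)      # False
# is_heap(50) would run forever -- a "hole" in the concept
```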
Our programs in the head for telling us what words mean have “holes” in them. Our concepts have holes. And when a program for some word fails to respond with an answer — when the hole is “hit” — we see that the concept is actually vague.
Why, though, is it so difficult to have programs in the head that answer YES or NO when input with any object? Why should our programs have these holes?
Holes are an inevitability for us because they are an inevitability for any computing device, and we are computing devices.
The problem is called the Always-Halting Problem. Some programs have inputs leading them into infinite loops. One doesn’t want one’s program to do that. One wants it to halt, and to do so on every possible input. It would be nice to have a program that sucks in programs and checks to see whether they have an infinite loop inside them. But the Always-Halting Problem states that there can be no such infinite-loop checking program. Checking that a program always halts is not generally possible.
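The standard argument for this impossibility is a diagonalization: assume you had such a checker, and build a program that defeats it. A minimal sketch, with all the names hypothetical:

```python
def make_trouble(always_halts):
    """Given any candidate 'does this program always halt?' checker,
    construct a program the checker must get wrong (diagonal argument)."""
    def trouble():
        if always_halts(trouble):
            while True:     # checker said 'always halts' -> loop forever
                pass
        # checker said 'may loop' -> halt immediately
    return trouble

# Any concrete checker is defeated. A checker that always answers
# 'may loop' yields a trouble() that promptly halts:
pessimist = lambda prog: False
t = make_trouble(pessimist)
t()  # returns immediately, contradicting the pessimist's verdict
```

Whatever the checker says about `trouble`, `trouble` does the opposite, so no checker can be right about every program.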
That’s why programs have holes in them — because it’s computationally impossible to get rid of them all.
And that’s why our own programs in the head have holes in them. That’s why our concepts have holes, or borderline cases where the concept neither clearly applies nor clearly fails to apply.
Furthermore, notice a second feature of vagueness: Not only is there no clear boundary between where the concept applies and does not, but there are no clear boundaries to the boundary region.
We do not find ourselves saying, “84 grains of sand is a heap, 83 grains is a heap, but 82 grains is neither heap nor non-heap.”
This facet of vagueness — which is called “higher-order vagueness” — is not only something we have to deal with, but is also something which any computational device must contend with.
If 82 grains is in the borderline region of ‘heap’, then it is not because the program-in-the-head said “Oh, that’s a borderline case.” Rather, it is a borderline case because the program failed to halt at all.
And when something fails to halt, you can’t be sure it won’t halt. Perhaps it will eventually halt, later.
The problem here is called the Halting Problem, a simpler problem than the Always-Halting Problem mentioned earlier. The issue now is simply whether a given program will halt on a given input (whereas the “Always” version concerned whether a given program will halt on every input).
And this problem also is not generally solvable by any computational device. When you get to 82 grains from 83, your program in the head doesn’t respond at all, but you don’t know it won’t ever respond.
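One way to see why a finite observer can never rule out a later halt: any real check must give up after some step budget, and “ran out of budget” looks exactly the same for a slow halter as for a genuine non-halter. A toy sketch, with illustrative names and budgets:

```python
def run_with_budget(program, budget):
    """Step `program` (a generator function) at most `budget` times.

    Returns 'halted' if it finished, else 'unknown': a finite observer
    cannot distinguish 'has not halted yet' from 'will never halt'.
    """
    steps = program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return 'halted'
    return 'unknown'

def quick():                 # halts after 2 steps
    yield
    yield

def slow():                  # halts, but only after 1000 steps
    for _ in range(1000):
        yield

def forever():               # never halts
    while True:
        yield

run_with_budget(quick, 100)    # 'halted'
run_with_budget(slow, 100)     # 'unknown' -- the same verdict as...
run_with_budget(forever, 100)  # 'unknown' -- ...a genuine non-halter
```

No matter how large the budget, some halting program halts just beyond it, so “unknown” can never be upgraded to “never.”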
Your ability to see the boundary of the borderline region is itself fuzzy.
Our concepts not only have holes in them, but unseeable holes, in the sense that exactly where the borders of the holes lie is unclear.
And these aren’t quirks of our brains, but necessary consequences of any computational creature — man or machine — having concepts.
This originally appeared August 19, 2010, at Science 2.0. (See the comments there for some good discussion.)
Mark Changizi is Director of Human Cognition at 2AI, and the author of The Vision Revolution (Benbella Books) and the upcoming book Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella Books). He is working on his fourth book at the moment, tentatively titled Making Faces, about emotions and facial expressions.
Have a talent and enjoyment for inflicting prescribed doses of pain? Your dream job awaits. (Biology undergraduate required.) Contact: 555-8428
This “harnessing” strategy is just the tip of the iceberg – our modern civilization is, in myriad ways, shaped to fit our fundamentally uncivilized selves. Culture has given us clothes that fit our body shapes, color patterns that fit our innate color senses, lexicons that fit our brains, religions that fit our aspirations, and chairs that fit our butts.
But there is one blaring gap in how we have been harnessed for modernity, a gap that, if addressed, would lead to a revolution in safety and well-being for humankind.
What’s missing is pain.
Pain is crucial, of course, because it keeps us safe, and prevents us from engaging in acts that injure or slice off parts of ourselves. Although wishing for a world without pain sounds initially alluring, one quickly realizes that such a world would be hell – it would be a world of the walking bruised and hideously injured (unless you’re into that). Those who lack pain don’t last long. And even if they avoid catching on fire or bleeding to death, they often succumb to death by a thousand pricks (e.g., they don’t shift their body weight as the rest of us do when they sit too long in one position, and this leads over time to circulatory damage).
Pain is designed to be elicited before injury actually occurs, with the hope that it prevents injury altogether. (E.g., see Why Does Light Make Headaches Worse?) Pain is evolutionarily designed to cause us to say, “Ouch!”, rather than, “Darn, I needed that appendage!”
More importantly for our purposes here, pain is rigged to be elicited in scenarios that would have been dangerous for our ancestors out in nature. A great example of what happens when animals meet an injurious situation that no pain mechanism deters them from is natural gas accumulating in low spots. One animal gets there and dies. Another animal sees an easy meal, and also dies. Soon there are many dozens of dead animals there, lured to their death, with life-snuffing injuries sneaking up on them without the benefit of warning pain.
And there’s your problem! We no longer live in the nature that shaped our bodies and brains, and the dangerous scenarios we now face aren’t the same as those our ancestors faced. Electricity, band saws, nail guns, stove tops, toasters perched next to bathtubs, and countless other modern dangers exist today, dangers that we’re not designed to have safety-ensuring pain to protect us from (until it’s too late).
What we need are technologies that inflict “smart pain,” pain not only designed to go off at signs of modern dangers, but designed to be painful in the right way, on the right body part, so as to optimally alert us to the acute danger.
Just to throw out a few examples…
- Your car rigged to shock you on your left or right side if you drive within several inches of an obstacle on your car’s left or right, respectively.
- Your computer set to shine a painfully bright red light if you are about to click on a suspicious link.
- A wearable device with a video sensor that detects the likelihood that the person you’re picking up at a bar has an STD, and then causes severe itching until you flee the bar.
You’re beginning to get the idea, and I hope you can see that the possibilities are endless. What I would like to see are your own suggestions for the future of pain engineering, and for a world where all sadists are employed.
This first appeared on May 6, 2010, as a feature at bodyinmind.au
Already well known for its short character limits, Twitter will announce in a press conference scheduled for later today that it will severely shorten its allowable “tweets”.
“I’m frankly amazed at all the crap people fit into their tweets,” said Twitter’s founder Jack Dorsey by phone with me yesterday. “By shortening tweets to 20 characters, they’ll be able to put their bit.ly link, and still have about seven characters left over for a snappy headline.”
Prior to this decision, “twitterers” could put in up to 140 characters, allowing Twitter tripe such as, “Recall the Tiger Woods apology… Is it me, or did it look like he was having oral sex during the press conference?” Asked whether shortening tweets to a seventh of their former length could hamper speech on Twitter, Dorsey quickly replied, “You can say the same thing with: ‘Tiger got oral on TV’.”
Twitter rights groups have promised a fight, with spokesperson Doug Degas tweeting the following announcement to his 31,794 followers: “#Sign #our #petition #to #keep #us #blithering #on #twitter http://bit.ly/c1PsXn @mrsaki @sextmessage #dontyouhatewhen (This is an RT must!)”
Filed by news staff, April 1, 2010
[Note added April 28, 2010: Now that April 1 is long past, I probably should emphasize that this was written on April 1.]