The elevator paradox
The elevator opens -- a TECH the only one in the cab -- they start to get in --

TECH: Going down...

They back off as the doors close --

ALAN: You ever notice how the first elevator is always going the wrong way?

CHARLIE: Actually, the Elevator Paradox accounts for that -- any one elevator spends most of its time in the larger section of the building, and is more likely to come from that direction when you hit the call button. If you stood here for several hours --

Another elevator OPENS with a DING -- going up --

ALAN: Saved by the bell.
The elevator paradox was first noted by physicist George Gamow, who had an office on the second floor of a seven-story building in which physicist Marvin Stern had an office on the sixth floor. Gamow noticed that about 5/6 of the time, the first elevator to stop on his floor was going down, whereas about the same fraction of the time, the first elevator to stop on the sixth floor was going up. For a single elevator, this makes perfect sense: of the six other floors 1, 3, 4, 5, 6, and 7, five lie above the second, and of the six other floors 1, 2, 3, 4, 5, and 7, five lie below the sixth, so an elevator caught at a random moment of its run is far more likely to be on the larger side of the observer's floor, and hence to arrive from that direction.
However, the situation takes some unexpected turns when more than one elevator is involved, as discussed by Martin Gardner. More surprisingly still, the multiple-elevator analysis of Gamow and Stern turns out to be incorrect, as Donald Knuth, the noted computer scientist and author of the classic Art of Computer Programming series, showed in 1969: as the number of independent elevators grows, the probability that the first one to arrive is going the "wrong" way tends to 1/2 rather than remaining at 5/6.
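Both effects are easy to check empirically. The following Monte Carlo sketch (in Python; the idealization is mine, not the exact setup of Gamow, Stern, or Knuth) models each elevator as shuttling independently and continuously between floors 1 and 7 at constant speed, with its phase in the 12-floor round trip uniformly random, and estimates how often the first elevator to reach floor 2 is heading down:

```python
import random

CYCLE = 12.0                      # one round trip 1 -> 7 -> 1, in floor-travel units
UP_AT_2, DOWN_AT_2 = 1.0, 11.0    # phases at which an elevator passes floor 2

def first_arrival_is_down(num_elevators):
    """Sample one observation; return True if the first elevator
    to reach floor 2 is heading down."""
    best_wait, going_down = float("inf"), None
    for _ in range(num_elevators):
        u = random.uniform(0.0, CYCLE)         # this elevator's current phase
        wait_up = (UP_AT_2 - u) % CYCLE        # time until it passes floor 2 going up
        wait_down = (DOWN_AT_2 - u) % CYCLE    # ... and going down
        if min(wait_up, wait_down) < best_wait:
            best_wait = min(wait_up, wait_down)
            going_down = wait_down < wait_up
    return going_down

trials = 100_000
for k in (1, 2, 3, 10):
    frac = sum(first_arrival_is_down(k) for _ in range(trials)) / trials
    print(f"{k} elevator(s): fraction arriving downward ~ {frac:.3f}")
```

With one elevator the observed fraction comes out near 5/6, or about 0.833; with more elevators it drifts toward 1/2, in line with Knuth's analysis.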
The crocodile's dilemma
BLAKELY: What I need...

DAVID: You need a hostage...

David steps into the elevator -- foot still holding the door open --

DAVID (cont'd): You've got two now.

MEGAN: David...

But David is focused on Blakely -- the two of them staring one another down, guns aimed at each other --

BLAKELY: I'll kill her...

DAVID: Then I'll kill you... and no one wins.

Blakely pauses, torn --

MEGAN: Let's start with your name --

BLAKELY: Stop talking -- let me think --

DAVID: Too many things to keep track of, huh? What's going on out there, what's going on in here... (beat) Hard to watch both of us... you'll make a mistake... (softly, slowly) ... you don't want to make a mistake.

BLAKELY: If I let her go, you let the door go.

MEGAN: We can't do that...

DAVID: Yes, we can. (to Blakely) Yes, I will.

Blakely considers for a beat... checking David's eyes, which are firm yet calm, reassuring...

... and he PUSHES the woman out of the elevator.

A taut beat; David and Blakely at a standoff, guns pointed at each other...

BLAKELY: I'm ready to die -- right now. Are you?

David hesitates a beat -- Megan starts to say something --

-- and David removes his foot from the elevator door. As the doors close...
The crocodile's dilemma is an unsolvable problem in logic dating back to the ancient Greeks and recounted, for example, by the German philosopher Carl von Prantl. In the dilemma, a crocodile captures a child and promises the father that it will release the child provided that the father can tell in advance what the crocodile is going to do. If the father says that the crocodile will not give the child back, what should the crocodile then do? If it keeps the child, the father's prediction was correct and the promise requires a release; if it releases the child, the prediction was wrong and the promise permitted no release. Either way, the crocodile breaks its word.
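The contradiction can be made mechanical by encoding the promise as a biconditional and enumerating the crocodile's options, as in this small Python sketch (the encoding is one natural reading of the promise, not the only one):

```python
# The crocodile's promise: "I release the child if and only if
# the father correctly predicts what I will do."
# The father's prediction: "You will NOT release the child."

for releases in (True, False):             # the crocodile's two possible actions
    prediction_correct = not releases      # the father predicted no release
    promise_kept = (releases == prediction_correct)
    print(f"releases the child: {releases}; "
          f"prediction correct: {prediction_correct}; "
          f"promise kept: {promise_kept}")

# Both cases print "promise kept: False": no action is consistent
# with the promise, which is precisely the paradox.
```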
The standoff among David, Blakely, and Megan in this scene is not entirely dissimilar to this paradox.
The Chinese room
CHARLIE: There's a classic thought experiment called the Chinese Room...

ENTER AUDIENCE VISION of a COMPUTER sitting in an EMPTY ROOM -- four blank white walls --

CHARLIE (V.O.) (cont'd): In which you have a computer receiving questions in Chinese, which it is programmed to answer, also in Chinese --

ON THE COMPUTER SCREEN -- Chinese characters scrolling through a conversation --

BACK TO SCENE --

CHARLIE (cont'd): -- so perfectly that the questioner thinks he's talking to a person. It's called a Turing Test. (beat) Now picture a man inside the computer...

BACK TO AUDIENCE VISION -- pushing into the computer -- until it becomes an IDENTICAL WHITE ROOM -- this one with a MAN SITTING AT A TABLE -- reading SLIPS OF PAPER WRITTEN IN CHINESE, consulting a HUGE BOOK, and WRITING ANSWERS ON SLIPS OF PAPER.

CHARLIE (V.O.) (cont'd): ... he doesn't speak Chinese, but simply formulates the answers by looking up the characters in a rulebook, and writing back the predetermined responses...

BACK TO SCENE --

CHARLIE (cont'd): John Searle argued that you cannot get meaning from a blind manipulation of symbols -- others argued that semantics and syntax are not so easily separated and classified.
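The rulebook procedure Charlie describes is easy to caricature in code. Here is a toy Python sketch (the two-entry phrase table is invented for illustration, not Searle's example): the program answers Chinese questions by pure table lookup, and nothing in it represents what any of the symbols mean.

```python
# A toy "rulebook": a lookup table from Chinese questions to canned
# Chinese answers. These two entries are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",     # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def man_in_the_room(question: str) -> str:
    """Answer by blind symbol manipulation: match the incoming characters
    against the rulebook and copy out the prescribed response. No part of
    this function models the meaning of the symbols it shuffles."""
    return RULEBOOK.get(question, "请再说一遍。")   # "Please say that again."

print(man_in_the_room("你好吗？"))   # prints a fluent answer, understands nothing
```

Searle's point is that scaling such a table up, however far, never adds understanding; his critics reply, as Charlie notes, that semantics and syntax are not so cleanly separated.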
The term "Chinese Box," which serves as the title of this episode, has several meanings. It is probably most commonly used to refer to a set of nested ornamental boxes and, in this usage, frequently acts as a metaphor for a scenario containing many layers of encapsulation. However, it can also refer to the Turing test-like problem mentioned above.
The Turing test is one of the simplest and perhaps best-known proposals for determining a computer's capability to display intelligence. It was proposed by the father of artificial intelligence, Alan Turing, in 1950. In the Turing test, an impartial (human) judge converses with two parties: a human and a computer (or, in Turing's language, a "machine") that has been programmed to attempt to appear human. If the judge is not able to determine which party is human and which is the computer, then the computer is said to pass the Turing test. (Note that it is not actually required that the computer mimic human speech, only that its responses be indistinguishable from those a human might make. For this reason, the communication is commonly restricted to take place via teletype, instant messaging, etc.). There are of course a number of additional specifications needed to account for the fact that the output of a sophisticated computer algorithm might be comparable to the writing of a young child (or even a non-native speaker of English). It is the latter case that is somewhat similar to the Chinese Room argument mentioned in this scene.
Turing predicted that computers would be able to pass his test by the year 2000. This prediction has proven somewhat optimistic: as of 2007, no computing device has been up to the challenge. In fact, there is an annual competition known as the Loebner Prize devoted to recognizing the software and hardware that comes closest to passing the Turing test.
John Searle laid out the Chinese Room argument in his paper "Minds, Brains and Programs," published in 1980. Ever since, this argument has been a recurring discussion point in the debate over whether computers can truly think and understand. Interestingly, whatever the truth of its conclusion, the argument itself is based on a fundamental misunderstanding of the nature of computers and of computer programs. A prominent example of how the Chinese Room argument fails is the life experience of Helen Keller, who managed to escape her own Chinese room, as discussed by Rapaport. In fact, an overwhelming majority of researchers believe that the Chinese Room argument is utterly wrong (as discussed in Stevan Harnad's articles on the subject) and that the more interesting question for cognitive science is why it is wrong.
There are a number of subtleties in making the Chinese Room argument. In particular, since written Chinese is logographic rather than phonetic, someone who does not speak the language has no way of writing down the characters corresponding to what was just said. (Even fluent speakers may not be able to write everything they can say; most foreigners who speak Chinese can write far less than they can speak.) This is why the Chinese Room argument requires Chinese characters, i.e., a text-only channel, as input. (Interestingly, even so, Chinese dictionaries are ordered by the number of "strokes" in each character, and deciding what constitutes a "stroke" is something even native speakers do not always get "right.")
Chomp
CHARLIE: I do seem to get more respect from the people who didn't give me noogies when I was six. (arranging the cookies) There's a classic math problem called "Chomp" --

-- the cookies in a grid, with one upside down in the lower corner -- he pulls away a column -- then another --

CHARLIE (cont'd): -- in which two players take turns removing cookies -- the object being to avoid taking the top left corner -- the poison cookie. You can mathematically prove that the first player will always win -- but you cannot mathematically determine first moves that guarantee a win.

DON (thoughtful): David made the first move -- getting in the elevator.

CHARLIE: And you became the third player -- screwed up the winning strategy. (beat) Chaos Theory holds that outcome is sensitive to initial conditions. We must restore the decision making process to the man who started it.

DON: Is this that Chinese Room thing again?

CHARLIE: It's a strategy I arrived at via the Chinese Room... (beat) Don, if it were you in there -- who would you want making the next move?
Chomp is a two-player game played on a rectangular "chocolate bar." Players take turns choosing one remaining square and removing ("chomping") it together with every remaining square lying below and to the right of it. The game is lost by the player who takes the poisoned top left square.
In the language of game theory, chomp is an impartial game with perfect information. Here, "impartial" means that from any position the available moves are the same for whichever player is to move (not that the game is somehow fair). It turns out that from any starting position bigger than 1x1, the first player in chomp can always win under optimal play. This follows from a strategy-stealing argument: if the opening bite that removes only the bottom right square could be defeated, the opponent's winning reply would produce a position the first player could just as well have reached with his or her own first bite; notably, the argument is nonconstructive and exhibits no explicit winning strategy. Because the player who makes the last move (taking the poisoned square) loses, chomp belongs to the class of games known as misère-form games, and for most impartial games the misère form is much harder to analyze than the so-called normal form, in which the last player to make a move wins.
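For small boards, the first player's guaranteed win can be verified by exhaustive search. The sketch below (in Python; the column-height representation and function names are mine) encodes a position as a tuple giving the number of squares remaining in each column, with the poisoned square at row 0 of column 0, and labels a position winning exactly when some bite leaves the opponent in a losing position:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(cols):
    """cols[j] = squares remaining in column j, counted down from the top;
    the poisoned square sits at row 0, column 0. Returns True if the
    player to move has a winning strategy."""
    if cols == (1,) + (0,) * (len(cols) - 1):
        return False                       # only the poisoned square remains
    for j, height in enumerate(cols):
        for i in range(height):            # bite at row i, column j
            if (i, j) == (0, 0):
                continue                   # taking the poison loses outright
            # The bite removes every square at row >= i in columns >= j.
            after = cols[:j] + tuple(min(c, i) for c in cols[j:])
            if not first_player_wins(after):
                return True                # this bite hands the opponent a loss
    return False

for m, n in [(2, 2), (3, 4), (4, 5)]:
    print(f"{m}x{n} board: first player wins? {first_player_wins((m,) * n)}")
```

The search confirms the theorem on these boards (every line prints True), and because it simply enumerates positions, it also illustrates why no general closed-form winning first move is known.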
References
Chen, H.-Y. and Leung, K.-T. "Rotating States of Self-Propelling Particles in Two Dimensions." Phys. Rev. E 73, 056107, 2006. http://dx.doi.org/10.1103/PhysRevE.73.056107
Copeland, B.J. "Hypercomputation in the Chinese Room." In Proceedings of the Third International Conference on Unconventional Models of Computation. London: Springer-Verlag, pp. 15-26, 2002.
Copeland, B.J. "Turing's O-Machines, Searle, Penrose and the Brain." Analysis 58, 128-138, 1998.
Damper, R.I. "The Logic of Searle's Chinese Room Argument." Mind Mach. 16, 163-183, 2006.
Gavalas, D. "Searle's Chinese Room to the Mathematics Classroom: Technical and Cognitive Mathematics." Stud. Philos. Educ. 26, 127-146, 2007.
Gamow, G. and Stern, M. Puzzle Math. New York: Viking, 1958.
Gardner, M. "Elevators." Ch. 10 in Knotted Doughnuts and Other Mathematical Entertainments. New York: W. H. Freeman, pp. 123-132, 1986.
Harnad, S. "What's Wrong and Right About Searle's Chinese Room Argument." In Essays on Searle's Chinese Room Argument (Ed. M. Bishop and J. Preston). Oxford University Press, 2001.
Harnad, S. "Searle's Chinese Room Argument." In Encyclopedia of Philosophy. Macmillan, 2005.
Kleene, S. C. Introduction to Metamathematics. Princeton, NJ: Van Nostrand, 1964.
Knuth, D.~E. "The Gamow-Stern Elevator Problem." J. Recr. Math. 2, 131-137, 1969.
Knuth, D.~E. The Art of Computer Programming, Vol. 1: Fundamental Algorithms, 3rd ed. Reading, MA: Addison-Wesley, 1997.
Knuth, D.~E. The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, 3rd ed. Reading, MA: Addison-Wesley, 1998.
Knuth, D. E. The Art of Computer Programming, Vol. 3: Sorting and Searching, 2nd ed. Reading, MA: Addison-Wesley, 1998.
Rapaport, W. J. "How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room." Mind Mach. 16, 381-436, 2006.
Searle, J. "Minds, Brains and Programs." Behavioral and Brain Sciences 3, 417-457, 1980.
Toner, J. and Tu, Y. "Flocks, Herds, and Schools: A Quantitative Theory of Flocking." Phys. Rev. E 58, 4828-4858, 1998.
Turing, A. M. "Computing Machinery and Intelligence." Mind 59, 433-460, 1950.
von Prantl, C. Geschichte der Logik im Abendlande, Vol. 1. Leipzig, Germany: S. Hirzel, 1855.
Wikipedia.org. http://en.wikipedia.org/wiki/Chomp