Solutions


Solution to Sweet Tooth

  1. How does Jeremy maximize the amount of cake he gets, given these rules? How much will he get?

Jeremy cuts the first cake into the fractions f and 1-f, where f is at least 1/2. If Marie chooses first, Jeremy will get 1 1/4 cakes from the last two by the argument of the warm-up, for a total of (1 - f) + 1 1/4. If Marie chooses second, Jeremy will get f and then will divide the last two cakes in half, for a total of f + 1/2 + 1/2. Setting these equal: f + 1 = 2 1/4 - f, so 2f = 1 1/4, giving f = 5/8. Therefore, Jeremy will receive 1 5/8 cakes and Marie will receive 1 3/8 cakes.

  2. Suppose there were seven cakes and Marie got the first choice for six of the seven cakes. Who has the advantage? By how much?

There is an inductive pattern. If for k cakes where Marie gets the first choice in all but one, Jeremy still has an advantage A (that is, Jeremy would receive A more cake than Marie among those k), then Jeremy will still have an advantage for k+1 cakes. Here is why: Jeremy, knowing that he will have an advantage of A if Marie chooses first for the first cake, cuts the first cake into two pieces of sizes 1/2 + A/4 and 1/2 - A/4. If Marie chooses the piece of size 1/2 + A/4, then Jeremy will suffer a disadvantage of A/2 for that first cake, but will gain an advantage of A for the remaining k cakes, so he has a net advantage of A/2. By contrast, if Marie chooses to go second for the first cake, then Jeremy will cut the remaining cakes in half. Again he will have an overall advantage of A/2. Thus, Jeremy's advantage shrinks from 1/2 for two cakes, to 1/4 for three, to 1/8 for four, and so on down to 1/64 for seven cakes.
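The halving recurrence above can be checked with a short sketch, using exact fractions:

```python
from fractions import Fraction

# Sketch of the advantage recurrence from the argument above: with two
# cakes Jeremy's advantage is 1/2, and each additional cake halves it,
# since Jeremy cuts the first cake into 1/2 + A/4 and 1/2 - A/4.
def advantage(cakes):
    a = Fraction(1, 2)          # advantage with two cakes
    for _ in range(cakes - 2):
        a /= 2                  # each extra cake halves the advantage
    return a

print(advantage(7))             # -> 1/64 for seven cakes
```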

  3. Is there any way to ensure that each child receives the same amount of cake, assuming Jeremy always cuts?

Yes, simply make it so Marie gets the first choice every time. That will force Jeremy to cut evenly every time.

Solution to Byzantine Bettors

  1. Now suppose there are only three advisors and only one of them always tells the truth. Again, you can make three even money bets. You start with $100. How much can you guarantee to win?

If the vote is two against one, then bet x on the majority. If you lose, the advisor in the minority must be the truth-teller, so you can bet everything on the next two bets by following the advice of the truth-teller. This gives you 4 × (100 - x). If you win by following the majority the first time, you should disregard the advisor in the minority and you'll have 100 + x after the first bet. In the next bet, if the remaining two agree, you bet everything because one of them must always tell the truth. But if not, you'll find out who is telling the truth by betting nothing. On the last bet, you follow the truth-teller and end up with at least 2 × (100 + x).

Setting 4 × (100 - x) = 2 × (100 + x), you conclude that 6x = 200, or x = 33 1/3. That single choice of x handles both cases when the vote is two against one. Thus, in either case, you end up with 2 2/3 of what you start with - a total of $266.67.
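The balance between the two branches can be verified exactly:

```python
from fractions import Fraction

# Losing the first bet leaves 4*(100 - x); winning leaves 2*(100 + x).
# The guaranteed amount is maximized when the two branches are equal.
x = Fraction(200, 6)                  # from 4*(100 - x) = 2*(100 + x)
assert 4 * (100 - x) == 2 * (100 + x)
guaranteed = 2 * (100 + x)
print(guaranteed)                     # -> 800/3, i.e. $266.67
```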

  2. What can you guarantee to win in four bets, with four advisors, three of whom can lie at will, and one of whom must tell the truth at least three out of four times?

In this case, unfortunately, you cannot guarantee anything. The reason is that your “advisors” can arrange it so that you never have a reason to choose one outcome over the other. Call the advisors A, B, C, and D.

  • Bet 1: AB advise 0; CD advise 1.

  • Outcome: 1, so A and B have each lied once.

  • Bet 2: AC advise 0; BD advise 1.

  • Outcome: 1, so A has lied twice and is a definite liar.

  • B and C have each lied once.

  • D has not yet lied.

  • Bet 3: BD advise 0; AC advise 1.

  • Outcome: 1, so B is a definite liar. C and D have each lied once.

  • Bet 4: AC advise 0 and BD advise 1.

Notice that, in bet 2, if you had bet on 1 but the outcome had been 0 (because the advisors had changed the written result), then B would be out, A and D would have lied once, and C would never have lied. This is symmetrical to what happened, with just a rearrangement of labels. So, you would have learned no more if the outcome had been 0. Similarly, in the third bet, if the outcome were 0, then D and B would both be possible partial truth tellers in bet 4. Thus, no series of outcomes ever teaches you whom to trust. The advisors can arrange it so that every bet you make is a loser.

  3. Under the reliability conditions of one partial truth teller who tells the truth four out of five times and three liars at will, but assuming you have five bets, can you guarantee to end with at least $150?

Even with five bets, the advisors can keep you from winning much. Their strategy starts similarly to their strategy when you had only four bets.

  • Bet 1: AB advise 0; CD advise 1.

  • Outcome: 1, so A and B have each lied once.

  • Bet 2: AC advise 0; BD advise 1.

  • Outcome: 1, so A has lied twice and is a definite liar.

  • B and C have each lied once.

  • D has not yet lied.

  • Bet 3: BC advise 0; AD advise 1.

Now, we must do a case analysis.

Case 1: Suppose you bet nothing on bet 3. Suppose the outcome is 1. Now you know that D is the partial truth teller because everyone else has lied twice. However, D may still lie once. If D advises 1 in bet 4 and you bet $x on 1, then if x < 50 and D tells the truth in bet 4, you cannot bet anything on bet 5 because D could lie then. If x ≥ 50, then D could lie in bet 4 and you will never do better than regain your original capital. So, if you bet nothing, you cannot attain $150.

Case 2: Suppose you bet something on 0 in bet 3. Then your situation is as in case 1, only worse.

Case 3: Suppose you bet $x on 1 in bet 3. If x < $12.50, then suppose further the outcome is in fact 1 and you bet $y in bet 4 on the outcome offered by D, where x + y < 50. Then, if you win in bet 4, you cannot bet in bet 5 (because D may lie), so you finish below $150. If x + y > 50, then y > $37.50, so if you lose bet 4, you will have less than $75, so cannot attain $150 in bet 5. Therefore x ≥ $12.50.

Now suppose that you bet $x on 1 in bet 3, but the outcome is 0. Then you have $100 - x and you know that B, C, and D have all lied once. In bet 4, two of them must advise one outcome and one must advise the other. Let's say you bet $z on the outcome suggested by the majority. Then x + z ≤ 25; otherwise losing in bets 3 and 4 would leave you less than $75, making it impossible to attain $150 in bet 5. But this implies z ≤ $12.50, and therefore if you win in bet 4, you will have at most $100 and D can still lie in bet 5. If you bet with the minority in bet 4, the majority could be telling the truth and then there would remain two advisors who have lied only once.

Therefore, you cannot guarantee to attain $150. In fact, a refinement of this argument shows you cannot even attain $134. Ivan Rezanka has argued that $133.33 can be guaranteed, but a short elegant proof is still missing.

Solution to A Touch of Luck

  1. Bob, Carol, and Alice play. Alice has 51 units, whereas Bob and Carol have 50 each. Bob must state his bet first, then Carol, and then Alice. Bob and Carol collude to share the reward if either wins. How can Bob and Carol maximize the probability that at least one of them wins if there is just one coin flip left?

Bob bets 50 on heads and Carol bets 50 on tails. If Alice bets 49 or less, then she can't possibly win: even if she guesses the correct coin orientation, her total of at most 100 will not exceed the 100 of whichever of Bob or Carol wins. If Alice bets 50 or more on heads, then she wins on heads and Carol wins on tails. So, Bob and Carol can guarantee a 1/2 chance that one of them wins.

  2. Does the result change if Alice must state her bet first?

If Alice bets 48 or less, then Bob and Carol bet as above. One of them will surely win. But if Alice bets at least 50, Bob and Carol can do no better than before. So, the result stays the same.

  3. Suppose Bob has 51 units and Alice 50. There are two coin flips left. Bob bets first for the penultimate flip. Alice bets first for the last flip. Does Bob have more than a 1/2 chance of winning? If so, how much more?

Bob will win with probability 3/4. In the penultimate round, he bets nothing. No matter what Alice bets, she cannot exceed 100 units. If Alice loses in the penultimate round, then since she must state her bet first in the last round, Bob just bets as she does in that round and is guaranteed to win. If Alice wins in the penultimate round, then, in the last round, Bob simply bets all his units on the coin face opposite to the one Alice chooses. (If Alice doesn’t bet in the last round, Bob can bet everything on, say, heads.) So Bob will win 3/4 of the time.
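The 3/4 figure can be confirmed by enumerating the four equally likely outcomes of the last two flips under Bob's strategy (Alice is assumed here to bet all 50 on heads whenever she bets; other choices do no better for her):

```python
from itertools import product

# Bob (51 units) bets nothing on the penultimate flip. If Alice lost
# that flip, he copies her final bet and stays ahead; if she won, he
# goes all-in on the face opposite to her choice.
wins = 0
for flip1, flip2 in product("HT", repeat=2):
    alice = 100 if flip1 == "H" else 0   # Alice all-in on heads, flip 1
    if alice == 0:
        wins += 1                        # Bob copies Alice's last bet and wins
    elif flip2 == "T":
        wins += 1                        # Bob's all-in on tails beats Alice's heads
print(wins, "of 4 outcomes")             # -> 3 of 4, i.e. probability 3/4
```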

  4. Suppose Bob has 51 units and Alice 50. Again, there are two coin flips left. This time, Alice bets first for the penultimate flip. Bob bets first for the last flip. Does Bob have more than a 1/2 chance of winning? If so, how much more?

Suppose Alice bets 2 on heads in the penultimate round. If Bob bets nothing or bets on tails and the penultimate flip lands on heads, then Alice wins for sure by using the strategy from the warm-up. If Bob bets all 51 on heads and the result is tails, then again Alice is sure to win on the last flip. If Bob bets anything on heads and the result is heads, then Bob has at most 102 and Alice has 52. In that case, on the last flip, Alice bets everything on the face opposite to Bob's and she will win 1/2 the time regardless of what Bob bets. So, Alice wins at least 1/2 the time and will win more often if Bob plays foolishly. A good move for Bob would be simply to bet the same amount as Alice on the same face (so 2 on heads if Alice bets 2 on heads) in the penultimate flip. He can then bet nothing on the last flip and win with probability exactly 1/2.

  5. Suppose Bob has 51 units and Alice 50. Again, there are two coin flips left. Again, Alice states her bet first in the penultimate round and Bob states his bet first in the final one. This time, Bob announces that he will bet 20 in the penultimate round, though he will wait to see Alice's bet before saying whether he will bet on heads or tails. Can Alice arrange to have more than a 1/2 chance to win?

Yes. Alice bets nothing in the penultimate round. If Bob loses, Alice wins for sure in the final round using the strategy of the second warm-up. If Bob wins, Alice bets everything on the coin face opposite to the one Bob chooses in the final round. She will win with probability 3/4.

  6. Bob, Alice, Rino, and Juliana have 100 units each and there are two flips left. Each person is out to win - no coalitions. Bob and Alice have bet 100 on heads. Rino has bet 100 on tails. Juliana must now state her bet, knowing that she will state her bet first on the last flip. What are her chances of winning if she bets 90 on either heads or tails? What should she bet?

If Juliana bets 90, she has no hope of winning. If she bets 90 on heads and the outcome is heads, then Bob and Alice will each have 200 compared to her 180 after the first flip. If she then bets x on, say, heads on the last flip, she knows that (i) Bob and Alice must split their bets on that last flip (because ties do them no good) and (ii) one of them can arrange to bet x or more on heads as well. Covered in this way, she has no chance of winning. If Juliana bets 90 on tails in the first flip and the outcome is tails, then Rino will cover her on the last flip.

So, Juliana should bet 100 on tails in the penultimate flip. If the outcome is tails, then she and Rino will each have a 1/2 chance to win in the last flip. Rino won’t bet as she does, again because ties do him no good. So, overall Juliana can arrange to have a 1/4 chance of winning, but she must bet high.

John Trono and Tom Rokicki lent their always-helpful insight to these solutions.

Note 

All questions relating to this Touch of Luck puzzle mentioned so far have paper-and-pencil solutions. But the general situation - multiple players, multiple rounds, and different initial wealths - is still elusive. Whoever solves it will have to mix, I think, expertise in game theory with algorithmic insight.

Solution to Information Gain

  1. Is it possible for Jordan to design a protocol to ensure that each of his friends will raise the correct number of fingers? If so, explain it. If not, does Jordan have a high probability of winning?

Jordan’s team can win for sure. This time, by prior agreement, all the mathematicians will know: Ariana represents 0 if she receives a blue ticket and 5 if she receives red; Bob represents 1 for blue and 6 for red; Caroline represents 2 for blue and 7 for red; David represents 3 for blue and 8 for red; and Ellen represents 4 for blue and 9 for red.

Jordan adds up all the numbers on all the hats. He divides them by 10 and takes the remainder. For example, suppose the numbers are 3, 2, 7, 9, and 5. He adds them up and gets 26. The remainder after dividing by 10 is 6.

He sends a ticket reflecting the remainder value to the appropriate person. In this example he would send a red ticket to Bob to represent 6.

Now each mathematician adds up the other numbers and then derives his or her own number as the unique value to make the remainder work out.

In this example, Caroline would have a 7 on her hat. She would see 3, 2, 9, and 5. Adding those up, she would get 19. She knows the final remainder has to be 6, so she would hold up 7 fingers.
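The whole protocol can be sketched in a few lines; the function below assumes the five hat numbers are listed in the order Ariana through Ellen:

```python
# Sketch of the protocol: Jordan's ticket conveys the sum of all hat
# numbers mod 10 (person i gets the ticket; blue means remainder i,
# red means remainder i + 5). Each mathematician sees every hat but
# their own and solves for the unique digit making the remainder work.
def raised_fingers(hats):
    remainder = sum(hats) % 10                 # what the ticket conveys
    return [(remainder - (sum(hats) - h)) % 10 for h in hats]

hats = [3, 2, 7, 9, 5]                         # the example from the text
print(raised_fingers(hats))                    # -> [3, 2, 7, 9, 5]: all correct
```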

Solution to Reach for the Sky!

  1. What would be a good strategy for the $100,000 Reach for the Sky! game, where you have three captures? Using that strategy, what would be your chance of winning?

Following the idea of the sultan’s daughters problem, we will express our protocols in terms of three numbers in increasing order: n1, n2, and n3. The idea is to reject the paper values in the first n1 balls, then choose the first ball whose paper value is greater than the highest of those in the first n1 balls. Call the ball number of the first capture b1 (so b1 > n1).

If b1 > n2, then make the second capture at a ball whose paper value is greater than any paper value seen in the first b1 balls. Otherwise (when b1 ≤ n2), reject all paper values until the first n2 balls (altogether) have been seen and then make the second capture at the first ball whose paper value is greater than any seen in the first n2. Call the position of the second capture b2.

If b2 > n3, then choose, for the third capture, the next paper value greater than any seen before position b2. Otherwise, reject until position n3 and then pick the next paper value greater than any seen in the first n3 balls. Call the position of the third capture b3.

Of course, if the highest value happens to be in the first n1 balls, then you will never choose anything, but it wouldn’t help to do so anyway. That suggests that it is useful to make n1 rather small.

For 100 balls in total and 3 captures, you win roughly 68 percent of the time if you set n1, n2, and n3 to be 14, 32, and 64, respectively.

Consider, for example, the following sequence of 100 values where each dollar amount written by your adversary has been replaced by its rank. That is, the highest dollar figure is replaced by 99 and the lowest by 0.

 78 3 80 90 25 95 51 27 57 40 65 48 55 72 26 73 54 31 15 2 89 61 97 98 8 50 38 18 88 52 4 42 68 16 62 9 94 99 20 28 56 58 76 93 10 96 63 35 81 91 66 11 30 5 0 24 82 29 41 12 47 71 44 92 43 32 85 84 7 59 60 86 69 21 83 79 64 67 74 37 1 46 22 19 33 39 87 45 36 13 23 75 34 70 53 49 77 17 6 14

The b1 position value in this case would be 23 where the value 97 is found, because 97 is the first value larger than 95, which is the largest of the first 14 numbers. The b2 value would be 38 where 99 is found and the third capture then is irrelevant.

The bottom line: Take the bet.

There may be an elegant analytical way to solve this problem as there is for the original sultan’s daughters problem, but the puzzle is so amenable to programming, it didn’t seem worth it. Simply choose ascending triples and then, for each triple, generate 10,000 random permutations of the numbers between 0 and 99 (the ranks) and see how often you win. Such a strategy is not limited to triples, obviously.
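The simulation just described can be sketched as follows; the thresholds (14, 32, 64) and the worked example are from the text, and the roughly-68-percent figure emerges from the Monte Carlo run:

```python
import random

# Three-capture strategy with thresholds n1, n2, n3 (counts of balls).
def play(vals, thresholds=(14, 32, 64)):
    caught, pos = [], 0
    for n_k in thresholds:
        start = max(pos, n_k)              # balls rejected before hunting
        best = max(vals[:start])           # best paper value seen so far
        hit = next((i for i in range(start, len(vals)) if vals[i] > best), None)
        if hit is None:
            break                          # no better value remains
        caught.append(vals[hit])
        pos = hit + 1
    return caught

# The worked example from the text: first capture 97, second 99.
example = [78,3,80,90,25,95,51,27,57,40,65,48,55,72,26,73,54,31,15,2,
           89,61,97,98,8,50,38,18,88,52,4,42,68,16,62,9,94,99,20,28,
           56,58,76,93,10,96,63,35,81,91,66,11,30,5,0,24,82,29,41,12,
           47,71,44,92,43,32,85,84,7,59,60,86,69,21,83,79,64,67,74,37,
           1,46,22,19,33,39,87,45,36,13,23,75,34,70,53,49,77,17,6,14]
print(play(example))                       # -> [97, 99]: the top rank is caught

random.seed(0)
trials = 2000
wins = sum(99 in play(random.sample(range(100), 100)) for _ in range(trials))
print(round(wins / trials, 2))             # roughly 0.68
```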

  2. How would your answer change if there were 1,000 lottery balls?

For 1,000 balls with three captures, you win roughly 66 percent of the time if you set n1, n2, and n3 to be 141, 317, and 650 respectively. So, you should still take the bet. Isn’t it amazing that the odds stay so good?

Solution to Pork Politics

  1. Which coalitions might form?

A and C will form a coalition together. C prefers A to any other coalition, because C will get the most money in that case. A also prefers C.

  2. How strong a majority should B prefer (51%, 67%, or 75%) in order for B to receive as much money as possible?

67% would be best for B. Then B would be part of the A, B, E coalition and B would get the fraction 25/70 of the money. A would also get more from this coalition than from any other winning coalition of which A would be a part. If the cutoff were 75%, A would team up with C, D, and E, and B would get nothing.

Solution to Social Games

  1. How would punishment have to change to eliminate cheats-cheats as a Nash equilibrium?

If punishment were to increase to -9, then the upper right corner would give Alice an expected gain of (4 × 0.9) + ((-9) × 0.1) = 2.7. The bottom right corner would give an expected gain of (1 × 0.9) + ((-9) × 0.1) = 0. That yields the matrix:

              Alice Honest    Alice Cheats
Bob Honest    3, 3            0, 2.7
Bob Cheats    2.7, 0          0, 0
In this case, if Alice cheats, Bob receives the same gain whether he cheats or not. So, the lower right and the upper right are the same from Bob’s point of view. If Bob decides not to cheat, then the game state will move to the upper right and from there to the upper left. To encourage this, you could make the punishment even slightly more negative than -9 (e.g., -9.01).

  2. If you are a public policy maker and you want to limit the punishment for cheating to -5, how much must you increase the probability of catching criminals to maintain the upper left as the only Nash equilibrium?

Suppose punishment is reduced to -5 and the probability of punishment is raised to 17 percent. Then the expected gain for Alice from cheating in the bottom right state is (0.83 × 1) + (0.17 × -5) = -0.02. In the upper right corner, her expected gain is (0.83 × 4) + (0.17 × -5) = 2.47.

              Alice Honest    Alice Cheats
Bob Honest    3, 3            0, 2.47
Bob Cheats    2.47, 0         -0.02, -0.02
So the only Nash equilibrium is the upper left.
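A quick sketch can confirm this claim mechanically. Below, payoffs[r][c] holds (Bob's payoff, Alice's payoff), with row/column 0 meaning "honest"; a cell is a Nash equilibrium when neither player can gain by unilaterally switching:

```python
# The -5 punishment / 17% detection matrix from the text.
payoffs = [[(3, 3),    (0, 2.47)],
           [(2.47, 0), (-0.02, -0.02)]]

def nash_cells(p):
    cells = []
    for r in (0, 1):
        for c in (0, 1):
            bob_ok   = p[r][c][0] >= p[1 - r][c][0]   # Bob can't gain by switching rows
            alice_ok = p[r][c][1] >= p[r][1 - c][1]   # Alice can't gain by switching columns
            if bob_ok and alice_ok:
                cells.append((r, c))
    return cells

print(nash_cells(payoffs))     # -> [(0, 0)]: honest/honest only
```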

  3. Assuming a 10 percent likelihood of being caught, is there a punishment value that would cause the only Nash equilibrium to be the upper right state (Alice commits crime but Bob does not)?

Under the assumption of a 10 percent likelihood of being caught, let’s say the punishment is -9.01. Then the lower right has an effective value of (-9.01 × 0.1) + (1 × 0.9) = -0.001. The upper right has (for Alice) a value of (-9.01 × 0.1) + (4 × 0.9) = 2.699, which we’ll approximate to 2.7.

This yields a game matrix of:

              Alice Honest    Alice Thief
Bob Honest    5, 2            0, 2.7
Bob Thief     2.7, 0          -0.001, -0.001

So, Alice will take the matrix to the upper right corner. Bob can rail about immorality, but Alice sees it all as a game.

  4. Still assuming a 10 percent likelihood of being caught, what must the punishment be to force Alice to stay honest?

Recall the game matrix relevant to this question:

              Alice Honest    Alice Thief
Bob Honest    5, 2            0, 4
Bob Thief     4, 0            1, 1

If punishment increases to, say, -17 (which might represent cutting off the hand of a thief who steals bread), then the game matrix becomes:

              Alice Honest    Alice Thief
Bob Honest    5, 2            0, 1.9
Bob Thief     1.9, 0          -0.8, -0.8

The only Nash equilibrium is the upper left.

Solution to Escape Management

  1. The police want to control T’s direction at only five intersections. They know that T is extremely motivated to escape. How long (in terms of single block steps) at most will it take T to arrive at either the northern or southern border?

Figure 1-14 shows most of the northern half. The situation is symmetric for the south. The police are represented by dashed arrows and the thief by solid arrows. The earliest T can escape is after traveling 22 blocks (which we call steps). This is the best possible for P because each southerly step that P forces costs T, in compensation, at most one east-or-west step plus one northerly step. Here is the calculation. T needs nine steps even without obstruction. Add to that one step forced by P to change direction to east-or-west, plus four southerly steps forced by P, each of which requires two compensatory steps by T. So the best possible outcome for P is 9 + 1 + (4 × 3) = 22.

Figure 1-14: The northern half of the city T wants to escape from. The police are represented by dashed arrows and the thief by solid arrows. T leaves after 22 block steps.

  2. Suppose that the police want to prevent T from reaching the northern or southern borders. Therefore the police plan to control T's direction m out of every n times for some m < n. What should m and n be to minimize the fraction of time P forces T to change directions while still preventing T from escaping?

The police can use a strategy more or less in the spirit of the solution to question 1. The idea is to let the thief go eight steps (to one step away from the northern border) and then to force the thief to go east (or west) and then south just as in the figure above. The thief then moves east or west and then the police force a move south again. The police eventually force the thief all the way down to row 1, but facing away from the northern border. This requires eight steps by the police and 15 steps by the thief. So the police intervene 8 times out of every 23.

  3. How does your answer change if T can escape by reaching any border, including the eastern and western ones?

If the police intervene two times out of every four, then they can force the thief to stay in a six by six square around the center. Each time the thief goes, say, east and then north, the police force the thief to go west and then south. If the thief goes in a single direction, then the police force the thief to stay within the central six by six square. This plan clearly lacks subtlety, but it works. I see no better general solution, but maybe a clever reader will find one. If so, please let me know.

These solutions are due to Ivan Rezanka.

Solution to Flu Math

  1. As long as the government is in a compelling mode, what fraction should be required to take the vaccine in order to minimize the overall average risk?

If a fraction f takes the flu shot, then the death toll is 5f + 10(1-f)(1-f) = 5f + 10f² - 20f + 10 = 10f² - 15f + 10. This is minimized when 20f = 15, or f = 0.75. At that point, we'd have a death toll of only (5 × 0.75) + (2.5 × 0.25) = 4.375%. Notice, however, that those who don't take the vaccine have a probability of only 2.5% of dying, whereas the ones who do take the flu shot have a 5% chance of dying.
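The minimization can be double-checked numerically by scanning the death toll 5f + 10(1-f)² over a fine grid of vaccination fractions f:

```python
# Death toll (in %) when a fraction f of the population is vaccinated:
# vaccinated risk 5%, unvaccinated risk 10*(1 - f)%.
def toll(f):
    return 5 * f + 10 * (1 - f) ** 2

best_f = min((i / 1000 for i in range(1001)), key=toll)
print(best_f, toll(best_f))     # -> 0.75 4.375
```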

  2. Under these conditions (no compulsion, full belief in government risk figures, selfishness), what percentage of people will take the flu shot?

Exactly half (or maybe one less than half, given the bias towards inaction). Here is why. Take the ith person. If half have already taken the flu shot, then there is no reason to take it. Person i will die with a 5% probability if he or she takes the shot and will die with a 5% probability or less if he or she doesn’t take the shot. If less than half have taken it and there are just enough people left for half to take it, then person i will take it for sure, because if he or she doesn’t, the chances of living will be less than if he or she took the shot. We call this the “must-vaccinate” point.

  3. Suppose 25% of the population will not take the vaccine under any circumstances. Nobody else knows beforehand who they are. Then can you determine what percentage of people will in fact take the vaccine under the same assumptions?

In this case, at most 50% will take the vaccine. Once 50% have taken the vaccine, nobody (neither a refuser nor a normal citizen) will take it. If fewer than 50% have taken the vaccine and we are at the must-vaccinate point, then a non-refuser will take the vaccine, but a refuser of course will not.

  4. Again, the government will compel nobody. To what percentage should the risk of disease be inflated to achieve the vaccination level you determined in your answer to question 1?

The government wants f = 0.75 from the answer to question 1. So if the initial perceived risk is R, people will keep taking the vaccine until the risk to the unvaccinated falls to the vaccinated risk: R × (1 - f) = R × 0.25 = 5%, so R = 20%. That is, if everyone believes the initial unvaccinated risk is 20%, then 75% of the people will take the vaccine.

Solution to Whipping Ice

  1. Does Indy need more poles than the two shown by the dark circles in Figure 1-4 to go from the bottom to the top?

No. Two poles suffice, but the solution requires two connections to, and disconnections from, the same pole, as shown in Figure 1-15.

Figure 1-15: Indy uses the first pole both to straighten out and then eventually to curve in.

  2. There are several L-shaped barriers and several objects (at the Xs) that Indy wants to collect in a game called Treasure Skater. What is the minimum number of poles he will need and where should he place them?

Four poles are enough. Figure 1-16 shows the basic solution. Notice that poles can be used many times.

Figure 1-16: This is the route Indy should take starting from the bottom to win at Treasure Skater. The numbers indicate the order in which Indy visits the Xs. Notice that he does a sharp turn using the lower left pole and then uses that pole again to curve up to X1. He then uses the lower left pole and the lower right pole twice to swing to 2 and 3.

The tricky part is near the X labeled 4 (see Figure 1-17).

Figure 1-17: The difficult part of the solution is to change directions as Indy descends between X2 and X4. He has to use a distant pole.

These solutions are due to TJ Takei.

Solution to Optimal Jargon

  1. Suppose that in addition to the frequencies given in the warm-up, you know that in every message, B is always followed by A, and D is always followed by C. Could you arrive at a shorter encoding of the message?

In that case, we can form a code like this:

 A–1
 BA–01
 C–000
 DC–001

There will be 15 As by themselves with a cost of 15. There will be 15 BAs with a cost of 30. There will be 5 DCs with a cost of 15. There will be 5 Cs with a cost of 15.

So the total cost is only 75.

  2. Suppose that in addition to the frequencies given in the warm-up, you know that in every message, D is always followed by C, C is always followed by B, and B is always followed by A. Could you arrive at an even shorter encoding of a 60-character message? Which phrases should be rendered as jargon?

This enables us to form the encoding:

 A–1
 BA–01
 CBA–000
 DCBA–001

In this case, there are 5 DCBAs with a cost of 15; 5 additional CBAs (i.e., not included in the DCBAs) with a cost of 15; 5 additional BAs with a cost of 10; and 15 additional As with a cost of 15. This gives a total of only 55 bits, less than one bit per character. The jargon terms should correspond to the phrases DCBA, CBA, BA, and A.

  3. Suppose that in addition to the frequencies given in the warm-up, you know that in every message, D is always followed by C, C is always followed by B, and B is followed by A 13 out of 15 times. What is the shortest encoding you could find of a 60-character message?

We’ll try two encodings. The first encoding includes the same phrases as in the solution to question 2, and then adds long codewords for the exceptional cases.

 A–1
 BA–01
 CBA–000
 DCBA–00101
 DCB–00100
 CB–00110
 B–00111

For this first encoding, let's consider two cases. In the first case, assume that there will be 5 DCBAs with a cost of 25; 5 additional CBAs (i.e., not included in the DCBAs) with a cost of 15; at least three additional BAs with a cost of 6; two Bs with a cost of 10; and 15 additional As with a cost of 15. In the second case, assume instead that, say, two of the CBAs were CBs instead, but that there were five BAs. The cost of the phrases beginning with C would then increase from 15 to 19, but the cost of the phrases beginning with B would decrease from 16 to 10. If two of the DCBAs were DCBs instead, then the cost of phrases beginning with D would not increase at all. So the worst kind of message from the point of view of this encoding is one where two of the phrases that begin with B have no following A. The total cost of such a message is 25 + 15 + 6 + 10 + 15 = 71.

The second encoding uses only jargon terms having no exceptions:

 A–1
 B–01
 CB–000
 DCB–001

There will be 5 DCBs with a cost of 15; 5 additional CBs (i.e., not included in the DCBs) with a cost of 15; five additional Bs with a cost of 10; and 30 As with a cost of 30. This gives a total of 15 + 15 + 10 + 30 = 70. Exceptions make jargon less attractive.
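The cost bookkeeping for this second encoding can be sketched directly (codeword lengths and phrase counts as stated above):

```python
# Exception-free encoding A-1, B-01, CB-000, DCB-001: total bits is the
# sum over phrases of codeword length times occurrence count.
code_lengths = {"A": 1, "B": 2, "CB": 3, "DCB": 3}
counts = {"A": 30, "B": 5, "CB": 5, "DCB": 5}

total = sum(code_lengths[p] * counts[p] for p in counts)
print(total)     # -> 70 bits for the 60-character message
```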

Solution to Using Your Marbles

  1. On Tuesday, she again starts with 5 red marbles, 6 blue marbles, and 7 white ones. In order to make the bags more varied, she wants to pack the bags so that each bag differs by at least two “in-or-outs” from any other bag. An in-or-out consists of inserting a marble into a bag or removing a marble from a bag. So, for example, R and RR differ by only one in-or-out (insert R). On the other hand, RW and RB differ by two (remove W and insert B). How many different bags of marbles can Carol create that differ by at least two in-or-outs?

R, B, W, RBB, BWW, WWW, RBW, BRR; 8 altogether. The method here was to build up from small numbers. There could be no bags with two marbles because then they would differ from the singleton bags by just one marble.
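The eight bags can be checked mechanically: every pair must differ by at least two in-or-outs, and the bags must use exactly the available 5 red, 6 blue, and 7 white marbles:

```python
from collections import Counter

bags = ["R", "B", "W", "RBB", "BWW", "WWW", "RBW", "BRR"]

# Number of in-or-outs between two bags: total difference in marble counts.
def in_or_outs(a, b):
    ca, cb = Counter(a), Counter(b)
    return sum(abs(ca[m] - cb[m]) for m in set(ca) | set(cb))

assert all(in_or_outs(a, b) >= 2
           for i, a in enumerate(bags) for b in bags[i + 1:])
used = Counter("".join(bags))
print(used["R"], used["B"], used["W"])     # -> 5 6 7: all marbles used
```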

  2. On Wednesday, Carol is given a bag that she cannot open. She knows she has 18 marbles. She also knows that there is at least one red, at least one blue, and at least one white. Knowing only this information but before seeing the marbles, Carol receives a phone call. “Can you guarantee to be able to give each child a bag with a different collection of marbles (i.e., at least one in-or-out apart), if there are eight children at a party? If not, then can you make this guarantee if there are seven children? If so, then how about nine?” How should Carol answer?

If there is one R, one B, and 16 Ws, then the best that Carol can do is to pack seven bags: W, R, B, WW, WWW, WWWW, WWWWW. The remaining W can be put in the bag with the R or the one with the B. Eight is not possible in this case because once there are only white marbles left, bags may differ only by number. If there is more than one red (or blue, or both) marble, but still fewer of these than white ones, then those red or blue marbles can be substituted for white ones in the larger bags. So Carol can guarantee seven bags, but no more.

  3. On Thursday, she again knows only that she has 18 marbles and at least one of each color, and is in the same situation as Tuesday as far as wanting the children to enjoy variety. So she wants each gift to differ from every other by at least two in-or-outs. How many children can she guarantee to prepare bags for in that case?

Again, if there is just one R, one B, and 16 whites, Carol can guarantee to prepare just six bags: W, R, B, WWW, WWWWW, WWWWWWW. She can’t do better, because, once she gets to white marbles alone, the bags must differ by two marbles each.

  4. On Friday, her friend Diane packs the bags. Diane assures Carol that (i) every bag contains a different (some difference of number in at least one color) collection of marbles, (ii) there are 18 marbles to start with, (iii) there are more white than blue and more blue than red, and (iv) the maximum number of different bags she could pack was seven. Assuming that Diane is an extremely capable packer, what is the maximum number of red marbles there could have been at the beginning?

If there is one red and two blues, this works: W, B, R, WW, BW, WWW, WWWW.

There are not enough marbles left for a bag containing five white marbles. If there are two reds and three blues, then it is easy to form eight bags even without using all the marbles: W, B, R, WW, WB, WR, WWW, WBW.

If there are two reds and at least five blues (the maximum number of blues is seven because there must be more whites than blues), we could have at least eight bags, e.g., W, B, R, WW, WB, WR, BB, WBW. Even more are possible, but by not specifying them, we handle more cases.

Similarly, if there are three reds and at least four blues, we could have at least eight bags, e.g., W, B, R, WW, WB, WR, BB, WRW.

Similar reasoning holds if there are four or five red marbles. Five is the upper limit of red marbles. There cannot be six or more red marbles, because then there would have to be at least seven blue marbles, leaving only five white ones.

Solution to Flipping Colors

  1. Can the knight flip the color of every square an odd number of times? If so, in how few moves?

Steve Schaefer has found a 25-move solution, which he describes as follows: “If we take the upper left corner to be square (0,0), then the first twenty moves flip the outer three rows/columns of the board, returning to the starting point at square (2,3). The four squares in the center of the board have not been flipped, and the starting square has been flipped twice. The remaining squares (2,3), (3,3), (3,4), (4,3), and (4,4) can be flipped with the following five moves”:

  • (3,3), (4,3), (4,2)

  • (4,3), (4,4), (3,4)

  • (3,3), (3,2), (2,2)

  • (3,2), (4,2), (4,3)

  • (3,3), (2,3), (2,2)

With that introduction, we can now present Schaefer’s 25-move solution, starting at (2,3):

 1: (1,3), (0,3), (0,2)
 2: (0,1), (0,0), (1,0)
 3: (1,1), (1,2), (2,2)
 4: (2,1), (2,0), (3,0)
 5: (3,1), (3,2), (4,2)
 6: (4,1), (4,0), (5,0)
 7: (6,0), (7,0), (7,1)
 8: (6,1), (5,1), (5,2)
 9: (6,2), (7,2), (7,3)
10: (6,3), (5,3), (5,4)
11: (6,4), (7,4), (7,5)
12: (7,6), (7,7), (6,7)
13: (6,6), (6,5), (5,5)
14: (5,6), (5,7), (4,7)
15: (4,6), (4,5), (3,5)
16: (3,6), (3,7), (2,7)
17: (1,7), (0,7), (0,6)
18: (1,6), (2,6), (2,5)
19: (1,5), (0,5), (0,4)
20: (1,4), (2,4), (2,3)
21: (3,3), (4,3), (4,2)
22: (4,3), (4,4), (3,4)
23: (3,3), (3,2), (2,2)
24: (3,2), (4,2), (4,3)
25: (3,3), (2,3), (2,2)

Solution to Scheduling Tradition

  1. Can you form an 11-day schedule for these teams that satisfies the constraints?

This solution comes from Dudeney’s classic puzzle book, written a century ago. The pairs for each day are separated by spaces. You’ll see that except for the A column, all other columns are different rotations of BEGFKHICDLJ, a sequence in which all the non-A letters appear exactly once. Consider some letter, say E. By construction, E appears in each of the non-A columns exactly once. In particular, E appears exactly once in the column paired with A, so E plays against A exactly once. Also, by construction, E is in a given day’s row exactly once (because all the rotations differ).

  • day 1: AB CD EF GH IJ KL

  • day 2: AE DL GK FI CB HJ

  • day 3: AG LJ FH KC DE IB

  • day 4: AF JB KI HD LG CE

  • day 5: AK BE HC IL JF DG

  • day 6: AH EG ID CJ BK LF

  • day 7: AI GF CL DB EH JK

  • day 8: AC FK DJ LE GI BH

  • day 9: AD KH LB JG FC EI

  • day 10: AL HI JE BF KD GC

  • day 11: AJ IC BG EK HL FD

How do we know that E encounters every other letter? In every pair of columns, except the leftmost, E encounters two teams. We just have to ensure that the teams in question are different every time. This is the case because if we list the pairs of columns by the offset from the first letter in the sequence BEGFKHICDLJ, then we see that the column paired with A begins at B, so is at an offset of 0. The next pair begins with C and D respectively at offsets 7 and 8, so their offsets differ by 1. We’ll denote that fact by the sentence “CD are at offsets 7 and 8, so the offsets differ by 1.”

  • CD are at offsets 7 and 8, so the offsets differ by 1

  • EF are at offsets 1 and 3, so the offsets differ by 2

  • GH are at offsets 2 and 5, so the offsets differ by 3

  • IJ are at offsets 6 and 10, so the offsets differ by 4

  • KL are at offsets 4 and 9, so the offsets differ by 5

Because the various column pairs differ by different offsets, a given pair XY will meet in only one column pair. For example, E and B will meet in the column pair with offset 1, as will E and G. E and F as well as E and J will meet in the column pair with offset 2 and so on. Whereas we have used E as an example, the same reasoning holds for all letters B to L. The case of A is clear by inspection of the first two columns.
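Both claimed properties, that every team appears exactly once per day and that every pair of teams meets exactly once over the 11 days, can be checked mechanically. Here is a short verification sketch in Python, with the day rows transcribed from the list above:

```python
days = [
    "AB CD EF GH IJ KL", "AE DL GK FI CB HJ", "AG LJ FH KC DE IB",
    "AF JB KI HD LG CE", "AK BE HC IL JF DG", "AH EG ID CJ BK LF",
    "AI GF CL DB EH JK", "AC FK DJ LE GI BH", "AD KH LB JG FC EI",
    "AL HI JE BF KD GC", "AJ IC BG EK HL FD",
]
teams = set("ABCDEFGHIJKL")
seen = set()
for day in days:
    pairs = day.split()
    # every team plays exactly once per day
    assert set("".join(pairs)) == teams
    for p in pairs:
        seen.add(frozenset(p))
# all 12 * 11 / 2 = 66 pairs occur, each exactly once
assert len(seen) == 66
print("schedule verified")
```

If any pair repeated, fewer than 66 distinct pairs would appear in the 11 × 6 slots, so the two assertions together establish the schedule's validity.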

Solution to Fractal Biology

  1. Can you achieve the wounded distance two condition for eight nodes using 16 links, assuming no node can have more than four interaction links?

With eight nodes, one needs only 16 links, as demonstrated in Figure 1-18.

image from book
Figure 1-18: An eight-node network in which there is a two-link path from any node to any other even if a node is deleted. Within one foursome, there is a single link from any node to any other. For a trip from a node n in one foursome to a node m in the other foursome, there are two paths of length at most two. These go through different intermediate nodes.

  1. What is the fewest number of links needed to achieve the wounded distance two condition for 12 nodes and at most five interaction links for any node?

With 12 nodes, one could design three sets of four nodes that are completely connected (requiring 3 × 6 = 18 links) and then add four links between every pair of foursomes, leading to an additional 3 × 4 = 12 links for a total of 30 altogether.

  1. What is the fewest number of links needed for 12 nodes, but without any limit on the number of interaction links any particular node can have?

If there is no limit on the number of links per protein node, try the design of Figure 1-19. You would need only 21 links. This is called the two-fan design, because each hub creates a fan. One needs two fans in case one hub is wounded.

image from book
Figure 1-19: A twelve node, wounded distance two network using only 21 links.
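The wounded distance two condition is easy to test in code. Here is a small checker sketched in Python, applied to the two-fan idea: nodes 0 and 1 act as the hubs (the numbering is mine, not from the book), each linked to every other node, which uses exactly 21 links:

```python
from itertools import combinations

def wounded_distance_two(nodes, edges):
    """After deleting any one node, every remaining pair must be
    joined by a path of at most two links."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    for w in nodes:                                  # the wounded node
        for a, b in combinations([v for v in nodes if v != w], 2):
            if b in adj[a]:
                continue                             # linked directly
            if (adj[a] & adj[b]) - {w}:
                continue                             # a surviving common neighbor
            return False
    return True

# Two-fan design: hubs 0 and 1, each linked to every other node.
nodes = list(range(12))
edges = [(h, v) for h in (0, 1) for v in nodes if v > h]
assert len(edges) == 21
assert wounded_distance_two(nodes, edges)
print("two-fan design verified")
```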

  1. We have a particular network having 108 proteins. We don’t yet know all the interactions. What is the fewest number of links needed to achieve the wounded distance two condition for any pair among these 108 nodes if there is a limit of 60 interactions that any single node can have? Try for a solution that uses fewer than 600 links.

Divide the nodes into 96 nodes (the base nodes) and 12 other nodes that will perform a connection function (the switchboard nodes). Number the 96 base nodes from 1 to 96. Call 1 through 24 the A nodes, 25 through 48 the B nodes, 49 through 72 the C nodes, and 73 through 96 the D nodes.

A good way to conceptualize the problem is that each base node group forms the supervertex of a tetrahedron (large black circles). Along each of the six edges linking supervertices there are two “switchboard” nodes, as you can see in Figure 1-20.

image from book
Figure 1-20: Switchboard nodes are clear circles. Supervertices, each consisting of 24 nodes, are big dark circles. The basic design represents 576 links. In addition, we have shown two links (the dashed curves) between non-neighboring switchboard nodes. Altogether there are 12 such links.

Each switchboard node is connected to every node in the two supervertices that form its edge (2 × 24 links). Thus, the total number of links involving base nodes is 2 × 2 × 24 × 6 = 576. Two switchboard nodes are “neighbors” if they share a supervertex. Each switchboard node needs a direct link to non-neighbors only. Each switchboard is the neighbor to all but two of the other switchboards. Linking to those non-neighbors adds a total of 12 × 2 = 24 to the collective degree of the switchboard nodes. This translates to half that number of edges, for a total of 12. This yields a total of 588 = 576 + 12 links.

This solution is due to TJ Takei.

Note 

The network problems here fit a general formulation: Find the minimum number of links necessary for N nodes, where the maximum distance between any pair of unwounded nodes is D, the maximum degree (number of interactions per node) is K, and there may be up to X wounded nodes. Such questions bear direct relevance to biology. Here are two mathematical biology papers on similar topics that I like very much:

“Regulation of metabolic networks: understanding metabolic complexity in the systems biology era,” Lee Sweetlove and Alisdair Fernie, New Phytologist 2005, 168: 9-24.

“Evolutionary capacitance as a general feature of complex gene networks,” Aviv Bergman and Mark Siegal, Nature 2003 Jul 31; 424(6948): 549-52.

Solution to As Easy as Pie

  1. Find a cut design obeying the above three rules that minimizes the total length of the perimeters for five pieces.

Make the first cut with a width of 1/5 of the total, and the second cut perpendicularly with a width of 1/4. Repeat the process with ever-wider widths (4/15 and 3/8) to keep the same area until the last cut halves the remaining piece (see Figure 1-21). This gives a total perimeter of 10 1/6 or about 10.17.

image from book
Figure 1-21: Recursive perpendicular cuts satisfying the child cut rule.
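The 10 1/6 figure can be checked with exact fractions. Each internal cut contributes its length to the perimeters of two pieces, so the total perimeter is the square's perimeter plus twice the total cut length. A sketch, assuming a unit square:

```python
from fractions import Fraction as F

a, b = F(1), F(1)        # a: the side the next cut runs across
cut_total = F(0)
for _ in range(4):       # 4 cuts make 5 pieces
    cut_total += a       # this cut's length
    b -= F(1, 5) / a     # slice off a strip of area 1/5
    a, b = b, a          # the next cut is perpendicular
perimeter = 4 + 2 * cut_total
print(perimeter)         # 61/6, i.e., 10 1/6
```

The successive cut widths produced by the loop are 1/5, 1/4, 4/15, and 3/8, matching the text, and the last cut does indeed halve the remaining piece.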

  1. How much better can you do for five pieces, after dropping the child cut rule?

The best design for five pieces using cuts that are parallel to the edges of the original square is shown in Figure 1-22. Rectangles 1, 2, 3, and 4 all have dimensions b by 1-b, where b(1-b) = 1/5, so b = (1 - 1/√5)/2 ≈ 0.276; the center square then has side 1 - 2b = 1/√5 and area 1/5 as well. This gives a total perimeter of 4 × 2 + 4/√5, which is about 9.79.

image from book
Figure 1-22: Without the child cut rule, one can obtain a smaller total perimeter for five pieces.

  1. What about nine pieces?

For nine pieces without the child-cut rule, we get the similar design in Figure 1-23. This gives a perimeter of about 15.3.

image from book
Figure 1-23: Without the child cut rule, here is the best design I know of for nine pieces.

  1. Can you get a smaller perimeter for five pieces, if, in addition to dropping the child cut rule, you drop the rule that cuts must be parallel to the original sides?

A beautifully economical design for five pieces without the child cut rule that permits cuts at an angle to the original edges is shown in Figure 1-24. The slant is about 19 degrees from the horizontal. This gives a total perimeter of under 9.4.

image from book
Figure 1-24: Without the child cut rule or the requirement that all cuts must be parallel to an original side, one can reduce the perimeter quite noticeably.

TJ Takei provided these solutions, the best I know of for these problems. For the tilting calculation of question 4, he used GNU Octave.

Solution to Lucky Roulette

  1. Do you ask for a spin of the chamber before the third trigger pull? How about before the fourth trigger pull? Using your best strategy, what are your chances of survival ignoring the first trigger pull?

If you don’t have me spin again now, you have a 1 in 3 chance of dying in the third pull of the trigger, which is the same as if you ask me to spin. But the advantage of spinning is that if I spin now and you survive the third trigger pull, then you will return to the 1 in 4 chance of dying for the last trigger pull.

Using that strategy (don’t spin before the second trigger pull, spin before the third, and don’t spin before the fourth), your chances of survival beyond the first trigger pull are 3/4 × 2/3 × 3/4 = 3/8. If you had asked me to spin every time, your chances of survival would be only 8/27, a little less than 1/3. Sorry for the violent setting, but I hope it helped focus your mind.
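The arithmetic can be laid out with exact fractions. The 3/4 figure assumes the setup implied by these numbers: after surviving a pull without a respin, there is a 1-in-4 chance that the next chamber is loaded, while a fresh spin restores the full 2-in-6 chance:

```python
from fractions import Fraction as F

# Best strategy: no spin before pull 2, spin before pull 3,
# no spin before pull 4.
best = F(3, 4) * F(2, 3) * F(3, 4)
assert best == F(3, 8)

# Spinning before every pull: 2/3 survival each time.
spin_always = F(2, 3) ** 3
assert spin_always == F(8, 27)

print(best, spin_always)
```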

Solution to Legal Logic

  1. If each patient has a 0.005 probability of being hurt, how likely is it to get one failure within the first 50 patients?

As preparation for our later work, write a procedure using a random number generator that produces a 1 with a probability of 0.005 and a 0 otherwise, 50 times. Count the number of 1s. Call this procedure 10,000 times. You can see that there will be at least one failure in the first 50 patients approximately 22% of the time. This result can also be obtained analytically: probability of at least one failure = 1 - probability of no failures in 50 = 1 - (1 - probability of failure each time)^50 = 1 - 0.995^50 ≈ 0.222.
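The suggested procedure might look like the following sketch (the seed is arbitrary, chosen only for reproducibility):

```python
import random

random.seed(0)
runs = 10_000
# count the runs in which at least one of 50 patients is hurt
hits = sum(
    any(random.random() < 0.005 for _ in range(50))
    for _ in range(runs)
)
print(hits / runs)   # close to the analytic value 1 - 0.995**50 = 0.222
```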

  1. Suppose that 18 people out of 4,000 have been hurt by the device. There are eight hospitals, each of which has treated 500 patients. What is the likelihood that at least one hospital exceeds a 1% test, i.e., has 6 patients who were hurt, even if the underlying chance of failure were in fact independent of hospital and is 0.005 per patient?

To solve this question empirically, create groups corresponding to 1 to 500 inclusive, 501 to 1,000 inclusive, …, 3,501 to 4,000 inclusive. Do the following 10,000 times: Draw 18 numbers uniformly at random without replacement (i.e., never take the same one again) from the range 1 to 4,000. See if any group has more than 1% failures. You will see that between 14% and 15% of the time, at least one group does. This again is a false guilty rate.
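A sketch of that simulation, with patients 1 to 4,000 represented as 0 to 3,999 and each hospital as a consecutive block of 500:

```python
import random

random.seed(0)
runs = 10_000
exceed = 0
for _ in range(runs):
    hurt = random.sample(range(4000), 18)   # 18 hurt patients, no repeats
    counts = [0] * 8
    for h in hurt:
        counts[h // 500] += 1               # hospital = block of 500 patients
    if max(counts) >= 6:                    # more than 1% of 500
        exceed += 1
print(exceed / runs)                        # roughly 0.14 to 0.15
```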

  1. Suppose the distribution of patients were much more skewed, with seven hospitals having treated only 200 patients each and the remaining one having treated all the rest. How likely is it that at least one of those smaller hospitals had hurt three people or more under the same conditions (i.e., 18 people who were hurt overall and the underlying chance of failure is 0.005 per patient)?

In this case, the tort lawyers will win around 35% of the time. The method is similar to the one we used to answer question 2, except the groups correspond to 1 to 200 inclusive, 201 to 400 inclusive, …, 1,201 to 1,400 inclusive, and then 1,401 to 4,000.

  1. Suppose we say that if a single hospital has more than 2% of bad outcomes, then that is a bad hospital. Assume that each hospital treats 500 patients. What is the likelihood for at least one hospital to hurt 10 or more people if the 0.005 hurt probability held and there were no bias in any hospital?

This would happen only about 5 times out of 10,000. This is so unlikely that there is a good chance the hospital really is bad.

  1. How would you answer the last question in the case that seven hospitals each treat only 200 patients and one treats 3,600?

In this case, the probability increases to about 2.5% (250 times out of 10,000). I’d still avoid that hospital.

Solution to The Box Chip Game

  1. For four chips and two guesses per color, can you figure out a way to win at least 1/6 of the time, assuming all rearrangements are equally likely?

  • Black guess: 1, 2

  • White guess: 1, 2

  • Red guess: 3, 4

  • Green guess: 3, 4

You win 4 times out of the 24 (= 4!) possible permutations. Why 4? Because black and white can be in either order among the first two boxes; red and green can be in either order among the last two boxes. Because 4 out of 24 is 1/6, you win 1/6 of the time. I think that’s the best one can do. If you can show me wrong, then please let me know.

  1. What if there are six chips and three guesses per color?

For six chips with three guesses per color, the best is 1/20. As puzzlist Alan Dragoo puts it: “Guess the same half of the boxes for one half of the chip colors and the other half of the boxes for the other half of the chips. This gives a probability of (n/2)! × (n/2)! / n! of winning.” For 6, this comes out to be 3! × 3!/6! = 36/720 = 1/20.

  1. How does this probability generalize for n chips (n even) and n/2 guesses per color?

Following Alan Dragoo’s suggestion is the best approach I know of. It works for all even numbers, which is fine because the puzzle allows only even numbers. Unfortunately, (n/2)! × (n/2)! / n! rapidly approaches zero. For 8, it’s only 1.4%. For 12, it’s 0.1%.
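A few lines confirm the formula and show how quickly it shrinks:

```python
from math import factorial

def win_prob(n):
    """Split-the-boxes strategy: half the colors guess the first n/2
    boxes, the other half guess the last n/2."""
    half = factorial(n // 2)
    return half * half / factorial(n)

assert abs(win_prob(4) - 1 / 6) < 1e-12
assert abs(win_prob(6) - 1 / 20) < 1e-12
print(win_prob(8), win_prob(12))   # about 0.014 and about 0.001
```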

  1. How well can you do in the case of four chips and two guesses under the procedural agent assumption? (Hint: You can win over 40 percent of the time.)

Identify the colors with numbers. This is an arbitrary association, but fixing it consistently will help.

  • Black – 1

  • White – 2

  • Red – 3

  • Green – 4

Here are the programs. The boxes are numbered 1, 2, 3, and 4 in left-to-right order.

  • Black: 1; Black→win, White→2, Red→3, Green→4

  • White: 2; White→win, Black→1, Red→3, Green→4

  • Red: 3; Red→win, Black→1, White→2, Green→4

  • Green: 4; Green→win, Black→1, White→2, Red→3

The notation means the following: Black (the agent associated with the black chip) starts at box 1; if box 1 has a black chip, then Black wins; if box 1 has a white chip, then Black next opens box 2; if box 1 has a red chip, then Black next opens box 3; and so on. Suppose box 1 has a white chip, so Black opens box 2. If box 2 then has a green chip, then Black next opens box 4.

This strategy wins in 10 out of the 24 possible permutations. To see why, consider any ordering of the colors in boxes.

1      2      3      4      win/lose
Black  White  Red    Green  win
Black  White  Green  Red    win
Black  Red    White  Green  win
Black  Red    Green  White  lose
Black  Green  White  Red    lose
Black  Green  Red    White  win
White  Black  Red    Green  win
White  Black  Green  Red    win
White  Red    Black  Green  lose
White  Red    Green  Black  lose
White  Green  Black  Red    lose
White  Green  Red    Black  lose
Red    Black  White  Green  lose
Red    Black  Green  White  lose
Red    White  Black  Green  win
Red    White  Green  Black  lose
Red    Green  Black  White  win
Red    Green  White  Black  lose
Green  Black  White  Red    lose
Green  Black  Red    White  lose
Green  White  Black  Red    lose
Green  White  Red    Black  win
Green  Red    Black  White  lose
Green  Red    White  Black  win

Notice that there are 10 wins here.
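Instead of tabulating the 24 cases by hand, one can simulate the pointer-following programs. In the sketch below, colors and boxes are both numbered 0 through 3, and perm[i] is the color of the chip in box i:

```python
from itertools import permutations

def all_agents_win(perm, guesses=2):
    # The agent for color c opens the box with c's number first, then
    # follows the color it finds, for at most `guesses` openings.
    for c in range(len(perm)):
        box = c
        for _ in range(guesses):
            if perm[box] == c:
                break            # found its own color
            box = perm[box]      # follow the chip just seen
        else:
            return False         # this agent ran out of guesses
    return True

wins = sum(all_agents_win(p) for p in permutations(range(4)))
print(wins)                      # 10 of the 24 permutations
```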

  1. How well can you do if you have six chips and three guesses?

Out of the 720 possible permutations of the 6 elements, you can win 276 times (more than 38 percent of the time). The agents all agree in advance on the following arbitrary association A of colors to boxes. It indicates which box to open next, based on the color found in the previous box.

  • Black → box 1

  • White → box 2

  • Red → box 3

  • Green → box 4

  • Blue → box 5

  • Orange → box 6

Here is the program for the agent corresponding to color X. Start at the box corresponding to color X in association A. If you find X, then you’ve won, so you can stop. Otherwise, look up the color you find, say Y, in A and look at that box. If you find X, then you’ve won, so you can stop. Otherwise, look up the color you find, say Z, in A and look at that box. If you find X, then you’ve won. Otherwise, you lose.

For example, White does the following:

  • White: 2; Black→1, White→win, Red→3, Green→4, Blue→5, Orange→6
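The 276 figure can be confirmed by running the pointer-following simulation over all 720 permutations, now allowing three openings per agent:

```python
from itertools import permutations

def all_agents_win(perm, guesses=3):
    # Each agent starts at its own color's box and follows the chips.
    for c in range(len(perm)):
        box = c
        for _ in range(guesses):
            if perm[box] == c:
                break
            box = perm[box]
        else:
            return False
    return True

wins = sum(all_agents_win(p) for p in permutations(range(6)))
print(wins, wins / 720)          # 276 wins, just over 38 percent
```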

Note 

I first heard this puzzle from Peter Winkler. Michael Rabin explained the procedural agent idea to me. What is truly fascinating about this problem is the use of arbitrary assignments. In this, as well as in several other puzzles described or invented by Winkler, the idea is to commit everything to one approach. You either win all the time or lose often - kind of a start-up mentality.

By the way, if you happen to be an algebraist, characterizing when the contestant wins can be expressed succinctly: You win whenever the arrangement of colors in the boxes guarantees that the longest cycle in the resulting permutation is of length n/2 or less. Let me explain. First, notice that the mapping from colors to numbers is a permutation (a rearrangement) of those colors. For example,

  • Black → box 1

  • White → box 2

says that Black should be put first in the list, White should be put second, and so on. We don’t move the chips, of course, but it is in this way that we can think of these assignments as a permutation. Now, a cycle occurs when following the box colors happens to return us to a box we’ve already visited.

Suppose, for example, that the six boxes have the following colors:

  • 1: Blue

  • 2: Black

  • 3: Red

  • 4: Orange

  • 5: White

  • 6: Green

and we use the arbitrary permutation:

  • Black → box 1

  • White → box 2

  • Red → box 3

  • Green → box 4

  • Blue → box 5

  • Orange → box 6

Suppose we are looking for Red. The permutation says to go to box 3. As it happens, we find Red in box 3. So the length of the Red cycle is one. Suppose we are looking for Black.

  • The permutation says to look in box 1.

  • We find Blue.

  • So the permutation says to look at box 5.

  • We find White.

  • So the permutation says to look at box 2.

  • We find Black.

At this point we win, but note that the next box to look at would be box 1. That is, the color of the chip that sends us back to the first box we looked at must be the color we are looking for. (For Black, we first look at box 1, and the color of the chip in box 2 would send us back to box 1.) In this case, the cycle in the permutation is of length three: 1 → 5 → 2 → 1. In fact, for this permutation, every cycle is of length three or less.

On the other hand, suppose that the arrangement of the colors in the boxes were:

  • 1: Blue

  • 2: Black

  • 3: Red

  • 4: White

  • 5: Orange

  • 6: Green

Now consider starting with White using the same permutation as above:

  • Start in box 2.

  • Find Black.

  • Go to box 1.

  • Find Blue.

  • Go to box 5.

  • Find Orange.

  • Go to box 6.

  • Find Green.

  • Go to box 4.

  • Find White.

In this case, we have 2 → 1 → 5 → 6 → 4 → 2. This is a cycle of length five.

Moral of the story: A succinct statement of the solution may not lead to a succinct explanation of the statement.

Solution to Feedback Dividends

  1. If Pgood is 0.9, what is the probability of hitting row 8, column 5 using the FeedYes strategy and using the FeedNo strategy?

The probability of winning under FeedYes is about 0.89 when Pgood is 0.9. It cannot be better than 0.9 because even if you are on the second-from-top row and adjacent to the winning square, there is some chance of losing. The strategy for FeedYes consists of trying to reduce the horizontal distance to the goal to zero (or to one on rows that are an odd distance away). Calculating the probability involves the following key recurrence ideas: If you are n steps (n > 0) away from the top row and 0 distance away, then your probability of hitting is the same as the probability of hitting if you are n-1 steps away and one away from column 5. Otherwise, with probability Pgood you will be one column closer to your goal and with probability 1-Pgood you will be one column farther from it, in both cases with n-1 steps remaining.

Under the FeedNo strategy, the probability is about 0.55 when Pgood is 0.9. To compute the probability for FeedNo, assume a strategy that will take four aims to the right and three aims to the left since this gives the best likelihood of hitting the destination. Given this, there are four ways to win: All seven aims are true; three of the right aims are true and two of the left are; two of the right are true and one of the left; and one right and zero left.

So, the feedback dividend is about 1.6.
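Both numbers can be reproduced from the recurrence. One detail not spelled out above is the starting position; the figures work out if the walker begins one column away from the goal with seven moves to make, which is the assumption in this sketch. The FeedNo sum below reproduces exactly the four winning cases listed above:

```python
from functools import lru_cache
from math import comb

P = 0.9  # Pgood

@lru_cache(maxsize=None)
def feed_yes(steps, dist):
    # dist = current horizontal distance from the goal column
    if steps == 0:
        return 1.0 if dist == 0 else 0.0
    if dist == 0:
        return feed_yes(steps - 1, 1)   # any move takes you one away
    return (P * feed_yes(steps - 1, dist - 1)
            + (1 - P) * feed_yes(steps - 1, dist + 1))

def feed_no():
    # 4 right-aims and 3 left-aims; k true right-aims and m true
    # left-aims give a net displacement of 2k - 2m - 1, which must be +1.
    return sum(
        comb(4, k) * P**k * (1 - P)**(4 - k)
        * comb(3, m) * P**m * (1 - P)**(3 - m)
        for k in range(5) for m in range(4)
        if 2 * k - 2 * m - 1 == 1
    )

yes, no = feed_yes(7, 1), feed_no()
print(round(yes, 2), round(no, 2))   # 0.89 and 0.55; dividend about 1.6
```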

  1. For which value of Pgood does the feedback dividend reach its greatest value? What is the feedback dividend in that case?

The value of Pgood for which the dividend ratio is highest is not 0.75 as one might think, but something around 0.772. For that value, the dividend ratio reaches 1.99.

  1. If we cut off the three rightmost columns and the two leftmost columns, then which value of Pgood would give the highest feedback dividend? Assume that falling off the board gives a certain loss.

In both the FeedNo and FeedYes cases, the analysis combines short trips: One wants to go from row 1, column 4 to row 3, column 4 in the first two moves, then to row 5, column 4 in the second two moves, then row 7, column 4, and finally row 8, column 5. For FeedNo, this gives a formula like the following:

image from book

For FeedYes, this gives a formula that is Pgood^4.

This gives a maximum feedback dividend of 1.75 when Pgood is 0.71.

Alan Dragoo contributed greatly to these solutions.

Note 

I have a friend who likes to swim backstroke in swimming pools that have no lane markers. He admits to colliding with other people a lot.

Solution to Number Clues

  1. For the first p, q pair, lcm(p,q) = 60, p × q = 240. What are p and q?

Here are the possible ways for two numbers to multiply to 240 and satisfy the lcm constraint.

  • 4, 60 - No good because p and q must both be two digit numbers.

  • 12, 20 - This could work.

So the solution must be 12, 20.

  1. The product of p and q is 140. gcd(p,q) = 2. What are p and q?

The prime factors of 140 are 2, 2, 5, and 7. Both p and q have 2 as a factor. So, the possible values of p and q are:

  • 2, 70 - No good because p and q must both be two digit numbers.

  • 4, 35 - No good for same reason.

  • 10, 14 - Possible.

  1. There is a pair p and q whose lcm(p,q) = 35. What are p and q?

Because lcm(p,q) = 35, the only prime factors available are one 5 and one 7, and at least one of p and q must be divisible by 5 and at least one by 7. To satisfy the two-digit requirement, each must be divisible by both. So p = q = 35.

  1. The sequence of numbers that opens the safe is a permutation of five numbers from the hints. The sequence enjoys the following property: The greatest common divisors between neighbors of the sequence strictly increase as one proceeds from left to right in the sequence, but all greatest common divisors are single digits. What is the sequence that opens the safe?

The numbers that open the safe are: 10 12 20 35 14. The greatest common divisor between 10 and 12 is 2; the greatest common divisor between 12 and 20 is 4; between 20 and 35 is 5; and between 35 and 14 is 7.
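All four answers are easy to verify mechanically:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

assert lcm(12, 20) == 60 and 12 * 20 == 240   # question 1
assert gcd(10, 14) == 2 and 10 * 14 == 140    # question 2
assert lcm(35, 35) == 35                      # question 3

combination = [10, 12, 20, 35, 14]
neighbor_gcds = [gcd(a, b) for a, b in zip(combination, combination[1:])]
assert neighbor_gcds == [2, 4, 5, 7]          # strictly increasing single digits
print("combination verified")
```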

Solution to Mind Games

  1. What is the smallest number of bit questions sufficient to determine a five-bit sequence no matter what it is? Remember that none of your bit questions need be correct. You just need to know how to crack the secret at the end.

Five are sufficient, even if all answers come at the end. Here are the guesses:

  • 10000

  • 11000

  • 11100

  • 11110

  • 11111

Here’s why this is enough. Number the positions from left to right as 1, 2, 3, 4, and 5. Consider position i, for i > 1. If there is a 1 in position i, then the reported number correct for the ith bit question will be one higher than for the (i-1)th bit question. For example, suppose the fourth position has a 1. That is, the secret code has the form xxx1x. Then 11110 will have a correct count that is one higher than that of 11100. Conversely, if the fourth position has a 0, then 11110 will have a correct count that is one less than that of 11100.

So far, we know about all bit positions except i = 1. We can determine the value of that bit position by examining the number correct for 10000. For example, if the last four digits are 0110, then the secret code is either 00110 or 10110. If the secret code were 00110, then the first bit question (10000) would yield a number correct of 2. If the secret code were 10110, then the number correct would be 3.
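This decoding rule can be checked exhaustively over all 32 possible secrets:

```python
from itertools import product

guesses = ["10000", "11000", "11100", "11110", "11111"]

def matches(secret, guess):
    return sum(s == g for s, g in zip(secret, guess))

def decode(answers):
    # bit i+1 is 1 exactly when the count rose from guess i to guess i+1
    tail = "".join(
        "1" if answers[i] == answers[i - 1] + 1 else "0"
        for i in range(1, 5)
    )
    # the first bit is whichever value makes the first answer come out right
    for b in "01":
        if matches(b + tail, guesses[0]) == answers[0]:
            return b + tail

for bits in product("01", repeat=5):
    secret = "".join(bits)
    assert decode([matches(secret, g) for g in guesses]) == secret
print("all 32 secrets recovered")
```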

It turns out that four questions are enough also. Make the first two questions be 00000 and 11100. In most cases, this narrows down the bit sequences quite substantially. For example, if the answers to these first two questions are 4 and 1, respectively, then the remaining possibilities are: 00001 and 00010. So we will focus on the two difficult situations where there are six possibilities left. Those situations occur when the answers to the first two questions are either 2 and 3 respectively or 3 and 2 respectively. Suppose the answers to the first two questions are 2 and 3 respectively. Then the remaining possibilities are:

  • 01101

  • 01110

  • 10101

  • 10110

  • 11001

  • 11010

Let’s say we ask 01110, a codeword that differs by at least two bits from both of the first questions. This will give us the following possible answers:

  • 0 - impossible

  • 1 - remaining possibilities: 10101 and 11001

  • 2 - impossible

  • 3 - remaining possibilities 01101, 10110, and 11010

  • 4 - impossible

  • 5 - remaining possibility 01110

The only difficult case is the one where the remaining possibilities are 01101, 10110, and 11010. In this case, we guess 00101. There are only three possible answers, each of which leads to a unique conclusion.

  • 0 - 11010

  • 2 - 10110

  • 4 - 01101

Philip Daniel Rogers suggested this approach to a solution. There is no elegant closed form proof that this always works and the program is a bit complex. However, the program for the case when there is no feedback is quite simple, as you’ll see next.

  1. Suppose we change the game to make it a little more difficult: You ask some number of questions and then I answer them all after you have asked all your questions. In that case, how many bit questions are sufficient?

Surprisingly, the very same guesses used in question 1 give a unique “signature” for each possible secret. That is, you make the following guesses:

  • 00000

  • 11100

  • 01110

  • 00101

Each possible sequence of five bits will give four answers and those answers differ for any pair of sequences. For example, suppose 11001 is the secret. That secret yields an answer of 2 correct to the bit question 00000, 3 to the bit question 11100, 1 to the bit question 01110, and 2 to the bit question 00101. No other bit sequence produces this same signature of answers: 2 3 1 2. Thus, you make the above four guesses, look at the signature, and determine which is the secret bit sequence. Writing a program to compute the signature given a series of guesses is, of course, very easy.
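Both the example signature and the uniqueness claim can be verified in a few lines:

```python
from itertools import product

guesses = ["00000", "11100", "01110", "00101"]

def signature(secret):
    return tuple(
        sum(s == g for s, g in zip(secret, guess))
        for guess in guesses
    )

secrets = ["".join(bits) for bits in product("01", repeat=5)]
assert signature("11001") == (2, 3, 1, 2)          # the example in the text
assert len({signature(s) for s in secrets}) == 32  # all signatures distinct
print("4 questions suffice with no feedback")
```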

  1. Given that the low info answers come at the end and there is no answer at all to the first bit question, is it possible to find the secret number, no matter what it is? If so, how many bit questions (counting the first one) do you need to pose in order to guarantee finding the secret number?

If, after the first guess, you are told only whether the number of correct ones has increased or decreased and all these answers come at the end, then six bit guesses suffice to know the secret code. Start with 00000.

  • 00000

  • 10000

  • 11000

  • 11100

  • 11110

  • 11111

You can determine whether each bit position is 1 or 0 by whether the number correct has increased or decreased relative to the previous one.

  1. How many bit questions do you need, if you hear the low info answers immediately after posing them?

If the answers are given immediately after they are asked, Tom Rokicki suggests the following approach requiring only five questions altogether. The basic idea is to use the fact that three answers are possible (same (S), increase (+), or decrease (-)) whereas our method for the previous question used only + and -. Start with 00000 again. Next guess 3 (that is 00011 in binary). This changes two bits, so if the answer is S (same number correct), then one of these 1s is correct and one isn’t. On the other hand, if the answer is + (increase), then both must be correct. Similarly, if the answer is -. Continue with this reasoning and you’ll find the answer. If you get stuck, see the following:

[3,
   S:[29,
      SS:[4,
         SS+:[14,
            SS+-:{5},
            SS+S:{22},
            SS++:{14}],
         SS-:[14,
            SS--:{17},
            SS-+:{26},
            SS-S:{9}]],
      S-:[5,
         S-S:{10,18},
         S-+:[10,
            S-+S:{6},
            S-++:{2},
            S-+-:{1}]],
      S+:[5,
         S+S:{13,21},
         S+-:[10,
            S+-+:{30},
            S+--:{29},
            S+-S:{25}]]],
   -:[5,
      -S:[9,
         -SS:{0,16},
         -S+:{8,24}],
      -+:[8,
         -+-:{4,20},
         -++:{12,28}]],
   +:[5,
      +-:[8,
         +-+:{11,27},
         +--:{3,19}],
      +S:[9,
         +SS:{15,31},
         +S-:{7,23}]]]

The answers to the left of the colon are the previous answers. The indentation gives precedence. For example, +:[5 means “if the answer to the first question is +, then ask about 5 (00101).”

  1. How does this generalize if the code is N digits long, but the digits could be base b for some b > 2? For example, b could be 10 and all digits 0 through 9 would be allowed. I have a solution when all answers come at the end, but that requires 1 + (b-1) × N questions. I don’t think it’s optimal.

We can use the reasoning of question 3. Here are the first of the questions for a five-digit, base-10 secret.

  • 00000

  • 10000

  • 20000

  • 30000

  • 40000

  • 50000

  • 60000

  • 70000

  • 80000

  • 90000

  • 01000

  • 02000

  • 03000

  • 04000

  • 05000

  • 06000

  • 07000

  • 08000

  • 09000

This yields a set of questions of size 1 + N(b-1), but I await a clever reader to suggest a better approach.
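The decoding step can be sketched in Python (the function names are mine). Each answer is compared to the all-zeros baseline: if the secret digit at a position is 0, every nonzero guess at that position scores one worse than the baseline; otherwise exactly one guess scores one better.

```python
# 1 + N*(b-1) questions: all zeros first, then every nonzero digit
# at every position; all answers (number of correct digits) arrive at the end.
def questions(n, b):
    qs = [[0] * n]
    for pos in range(n):
        for d in range(1, b):
            q = [0] * n
            q[pos] = d
            qs.append(q)
    return qs

def num_correct(q, secret):
    return sum(x == y for x, y in zip(q, secret))

def decode(secret, n, b):
    qs = questions(n, b)
    answers = [num_correct(q, secret) for q in qs]
    base = answers[0]                       # number of zero digits in the secret
    digits = []
    for pos in range(n):
        block = answers[1 + pos * (b - 1): 1 + (pos + 1) * (b - 1)]
        if all(a == base - 1 for a in block):
            digits.append(0)                # every nonzero digit scored worse
        else:
            digits.append(1 + block.index(base + 1))
    return digits

assert decode([3, 0, 9, 7, 2], 5, 10) == [3, 0, 9, 7, 2]
```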

Solution to Refuse and Reveal

  1. Rosalba: “I have five integers that may or may not be distinct.”

    Quentin: “What is the minimum?”

    Rosalba: “20.”

    Quentin: “Which of these would not allow me to infer all their values: number distinct, mean, maximum, or median?”

    Rosalba: “Only the median.”

    Quentin: “Great. I know the numbers.”

    What are they?

They must all be 20. If any value exceeded 20, then the maximum alone could not determine the values: with minimum 20 and maximum M > 20, both {20, 20, 20, 20, M} and {20, 20, 20, M, M} are possible. So the fact that only the median is unrevealing means every value is 20, in which case the number distinct (1), the mean (20), and the maximum (20) would each reveal the values, while a median of 20 would leave the two largest values open.

  1. Rosalba: “I have seven integers that may or may not be distinct.”

    Quentin: “What is the minimum?”

    Rosalba: “20.”

    Quentin: “Which of these are you willing to tell me (i.e., would not allow me to infer all their values): mean, median, and maximum?”

    Rosalba: “All of them.”

    Quentin: “Ok, what is the maximum?”

    Rosalba: “21.”

    Quentin: “I know which of mean and median you’re willing to tell me now.”

    Which? Why?

The mean would be enough to determine how many 20s and how many 21s there are. For example, if there are four 20s and three 21s, then the mean would be 143/7, about 20.43, and no other split gives that mean. Knowing the median would not be enough because a median of 20, for example, could result from four 20s and three 21s or from five 20s and two 21s.
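A short enumeration (variable names are mine) confirms this: with k copies of 20 and 7-k copies of 21, every k gives a different mean, but several values of k share a median.

```python
# Seven values with minimum 20 and maximum 21: k copies of 20, 7-k of 21.
means, medians = {}, {}
for k in range(1, 7):
    vals = [20] * k + [21] * (7 - k)      # already sorted
    means.setdefault(sum(vals) / 7, []).append(k)
    medians.setdefault(vals[3], []).append(k)

assert all(len(v) == 1 for v in means.values())    # the mean determines k
assert any(len(v) > 1 for v in medians.values())   # the median does not
```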

  1. Rosalba: “Can you find some situation in which I would be happier to tell you the mean rather than the median?”

    Quentin: “Could you give me a hint?”

    Rosalba: “In an example I can think of, there are three numbers, two of them distinct.” Give it a try.

Suppose there are three numbers with exactly two distinct values. If you know that the minimum is 20 and the median is 23, then you know that the remaining number must also be 23. On the other hand, if you know that the mean is 22, for example, this could result from 20, 20, and 26 as well as from 20, 23, and 23.
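The same enumeration style (names are mine) checks this claim: among triples with minimum 20 and exactly two distinct values, a median of 23 leaves one possibility, while a mean of 22 leaves two.

```python
from itertools import combinations_with_replacement

# Three integers, exactly two distinct values, minimum 20.
cands = [c for c in combinations_with_replacement(range(20, 40), 3)
         if min(c) == 20 and len(set(c)) == 2]
with_median_23 = [c for c in cands if c[1] == 23]   # tuples are sorted
with_mean_22 = [c for c in cands if sum(c) == 66]

assert with_median_23 == [(20, 23, 23)]   # median pins the numbers down
assert len(with_mean_22) == 2             # mean leaves an ambiguity
```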

  1. Rosalba: “Can you find some situation in which all of minimum, maximum, mean, and median are necessary and sufficient to find the identities of five numbers that are all integers?”

The minimum is 20, the maximum is 22, and the mean is 21.2. This could be explained either by three 22s and two 20s or by two 22s, two 21s, and one 20. The median (22 in the first case, 21 in the second) tells you which.
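An exhaustive check (my own sketch) shows these are the only two multisets matching the minimum, maximum, and mean, and that their medians differ:

```python
from itertools import combinations_with_replacement

# All five-integer multisets with min 20, max 22, and mean 21.2 (sum 106).
matches = [m for m in combinations_with_replacement(range(20, 23), 5)
           if min(m) == 20 and max(m) == 22 and sum(m) == 106]

assert matches == [(20, 20, 22, 22, 22), (20, 21, 21, 22, 22)]
assert matches[0][2] != matches[1][2]     # medians 22 and 21 disambiguate
```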

  1. Rosalba: “There are 17 numbers that are not all distinct. Their minimum is 30, mean is 34, and median is 35.”

    Quentin: “What is their total distance to 35?”

    Rosalba: “I won’t tell you, but the total distance to 35 is five less than the total distance to 38. Whoops! I shouldn’t have told you that.”

    Quentin laughing: “You’re right. Now I know the numbers.”

    What are they?

The point that minimizes the total distance to all the values is the median. That is, if there are an odd number n of values altogether, then the sum of the distances to the median is no larger than the sum of the distances to any other point. Here is why. Suppose the median is m and the total distance to m is x. Now consider the total distance to m+1. Because m is a median, there are (n-1)/2 values less than or equal to m besides the median itself. So at least 1 + (n-1)/2 values (those values plus the median itself) increase their distance by 1 when we measure distance to m+1 instead. At most (n-1)/2 values decrease their distance, each by at most 1.

So if there are 17 numbers, the median is 35, the mean is 34, and the distance to 38 is x+5, then there is one instance of 35, no instances of 36, and exactly one instance of 37. Here is why: Suppose there were one instance of 35 and no instances of 36 or 37, so the eight values above the median were all 38 or greater. Then, in moving from 35 to 38, the distance would increase by 3 × 8 = 24 for the eight numbers below 35, increase by 3 for the 35 itself, and decrease by 3 × 8 = 24 for the eight numbers that are 38 or greater, a net increase of 3. The actual net increase is 5, which can happen only if one of the upper eight values is a 37: its distance decreases by just 1 instead of 3, raising the net increase by 2. (A 36 would raise the net increase by 4, to 7, and a second 35 would raise it by 6, both too much.) So there are one 35, one 37, and seven values that are 38 or greater. Because the mean is 34, the total distance from the mean to the numbers greater than the mean is at least 1 + 3 + 28 = 32; we get those numbers from the 35, the 37, and the seven values that are 38 or greater, each at least 4 above the mean. The eight values below the mean must balance this total, and because each is at least 30, they can contribute at most 4 × 8 = 32. So they must all be 30s, and the seven upper values must be exactly 38. Therefore there are eight 30s, one 35, one 37, and seven 38s.
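A few lines of Python (my own check) verify that the claimed answer matches every statistic Rosalba revealed:

```python
# Claimed answer: eight 30s, one 35, one 37, seven 38s.
vals = sorted([30] * 8 + [35, 37] + [38] * 7)

assert len(vals) == 17
assert min(vals) == 30
assert sum(vals) / 17 == 34                     # mean
assert vals[8] == 35                            # median
d35 = sum(abs(v - 35) for v in vals)
d38 = sum(abs(v - 38) for v in vals)
assert d38 - d35 == 5                           # distance to 38 is five more
```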

  1. How would your answer to this question change if there were 1701 numbers but otherwise the same information as in the previous question?

The reasoning is exactly the same as for 17 numbers. The answer: one 35, one 37, 849 38s, and 850 30s.

Tom Rokicki helped greatly to clarify the formulation of this puzzle.

Solution to A Biting Maze

  1. The challenge is to find a route to the final page, and to decrypt the words and phrases on the way. There are hints in the encrypted words (they form a sentence of natural poetry) and in other parts of the Web page. Give it a try.

Here is a list of the page ids along with the decryption of the words and phrases in those pages.

  • f42, if

  • f37, you

  • f33, would

  • f32, see

  • f39, all

  • f44, of

  • f28, nature

  • f52, gathered

  • f23, up

  • f51, at

  • f17, one

  • f13, point

  • f46, in

  • f53, all

  • f34, her

  • f29, loveliness

  • f14, and

  • f19, her

  • f36, skill

  • f0, and

  • f2, her

  • f30, deadliness

  • f16, and

  • f4, her

  • f10, sex

  • f49, where

  • f15, would

  • f31, you

  • f3, find

  • f12, a

  • f27, more

  • f7, exquisite

  • f8, symbol

  • f22, than

  • f9, the

  • f6, mosquito

  • f25, if you have found the whole route to get here, you have solved the maze. congratulations.

The great naturalist Havelock Ellis wrote this sentence in 1920. Take a look before you squash the next bug.

Solution to Mad Mix

  1. We want a mixture having 2.7 liters of supertox in a three-liter mixture. We don’t want to pour more than 10 liters of any liquid laced with supertox down the drain. How can we achieve this?

First obtain 9 liters of supertox in the 10-vessel. (Fill the 10-vessel with supertox, pour the result into the 7-vessel until it fills, empty the 7-vessel into the reservoir, and then pour the three liters remaining in the 10-vessel into the 7-vessel. Now fill the 10-vessel again with supertox and fill the 7-vessel, which takes only 4 liters since it already has 3; the 10-vessel now has 6 liters. Empty the 7-vessel into the supertox reservoir and pour the 6 liters from the 10-vessel into the 7-vessel. Fill the 10-vessel with supertox once more and pour into the 7-vessel until it is full, leaving 9 liters of supertox in the 10-vessel.) Empty the 7-vessel, which holds pure supertox, back into the reservoir. Then add one liter of water to the 10-vessel and pour the resulting mixture into the 7-vessel, leaving 3 liters in the 10-vessel, of which 2.7 liters are supertox.
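The pouring sequence can be replayed step by step. In this sketch (class and method names are mine), each vessel tracks its supertox and water separately so that pouring moves a proportional share of each:

```python
# Simulate the pouring sequence; vessels track (supertox, water) amounts.
class Vessel:
    def __init__(self, cap):
        self.cap, self.s, self.w = cap, 0.0, 0.0
    def amount(self):
        return self.s + self.w
    def fill_supertox(self):                 # top up from the supertox reservoir
        self.s += self.cap - self.amount()
    def empty(self):                         # dump contents (reservoir or drain)
        self.s = self.w = 0.0
    def pour_into(self, other):              # pour until empty or other is full
        move = min(self.amount(), other.cap - other.amount())
        if move > 0:
            frac = move / self.amount()
            ds, dw = self.s * frac, self.w * frac
            self.s -= ds; self.w -= dw
            other.s += ds; other.w += dw

v10, v7 = Vessel(10), Vessel(7)
v10.fill_supertox(); v10.pour_into(v7); v7.empty()   # 3 liters left in v10
v10.pour_into(v7)                                    # move them to v7
v10.fill_supertox(); v10.pour_into(v7); v7.empty()   # 6 liters left in v10
v10.pour_into(v7)
v10.fill_supertox(); v10.pour_into(v7)               # 9 liters left in v10
v7.empty()                                           # pure supertox: back it goes
v10.w += 1.0                                         # add one liter of water
v10.pour_into(v7)                                    # 7 liters out, 3 remain

assert abs(v10.amount() - 3.0) < 1e-9
assert abs(v10.s - 2.7) < 1e-9                       # 2.7 liters of supertox
```

The same simulator can replay the pouring sequences of the other questions in this solution.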

  1. Can we get a mixture that is 2/9 supertox and the rest water without pouring any supertox down the drain?

First obtain 2 liters of supertox in the 10-vessel. To obtain 2 liters of supertox in the 10-vessel, fill the 7-vessel with supertox, and empty it into the 10. Repeat; but now there are 4 liters left in the 7-vessel. Return the 10 liters to the supertox reservoir. Now pour the 4 liters into the 10-vessel and refill the 7-vessel with supertox. Now pour from the 7-liter vessel to the 10. When you are done, there will be 1 liter left in the 7-vessel. Empty the 10-vessel back into the reservoir and then transfer the single liter from the 7-vessel to the 10-vessel. Fill the 7-vessel from the reservoir and pour into the 10-vessel, leaving space for two. Fill the 7-vessel again from the supertox reservoir and pour into the 10-vessel. This fills the 10-vessel and leaves 5 in the 7-vessel. Empty the 10-vessel back into the supertox reservoir and pour the 5 from the 7-vessel to the 10-vessel. Refill the 7-vessel. Pour from the 7-vessel into the 10-vessel, leaving two liters in the 7-vessel. Empty the 10-vessel back into the supertox reservoir and then pour the two liters from the 7-vessel to the 10-vessel. Finally, fill the 7-vessel with water and pour that into the 10-vessel.

  1. Can we get a 26% concentration of supertox in the 7-liter container?

With a big extra bucket, one can obtain any mixture having a ratio p/(p+q) of supertox where p and q are whole numbers whose sum is 100 or less. Here’s what you do: Create p liters of supertox, dump them into the big extra bucket, then create q liters of water and dump them in too. Any whole number of liters can be created with the two vessels. Suppose you want 53 liters, for example: fill the 10-liter vessel five times, putting the results in the big bucket; then fill the 10-liter vessel one more time, pour it into the 7-liter vessel, and put the three leftover liters into the big bucket. In our case, we want a mixture of at least 10 liters having a concentration of 26% supertox, so 13 liters of supertox and 37 liters of water will do. Fill the 10-liter vessel with supertox and pour it into the big bucket. Fill the 10-liter vessel with supertox again and pour it into the 7-liter vessel; the 3 remaining liters go into the big bucket, and the 7-liter vessel is emptied back into the supertox reservoir. Obtaining 37 liters of water is easy, of course (three 10s and a 7). Once you have the mixture, fill the 7-liter vessel from it; it holds a 26% concentration (1.82 liters of supertox). Filling the 10-liter vessel instead would give 2.6 liters of supertox at the same concentration.

This solution is due to Vlad Simionescu.

  1. Suppose that we want to get mixtures that are all supertox, 1/2 supertox, 1/3 supertox, 1/4 supertox, ... 1/50 supertox. You are allowed to specify the size of the two vessels. You may even include fractional sizes as long as the smallest vessel contains at least one liter and the largest contains at most 30 liters. In addition, you have the big bucket again with capacity over 100 liters. Which two vessel sizes would you choose in order to obtain each of these mixtures? Given those vessel sizes and given a target mixture, how would you obtain it?

Suppose we want all fractions down to 1/k; in our case, k = 50. Let the basic unit be 0.6 liters. Then the smaller vessel can be 17 basic units (10.2 liters) and the larger vessel 50 basic units (30 liters). Because 17 and 50 are relatively prime, we can generate any number of units between 1 and 49 from these vessels, even without the extra bucket. Start by generating 1 unit of supertox and putting it in the big bucket (which can in fact be very small). Then generate k-1 units of water and mix.
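The coprimality claim is easy to check numerically. In this sketch (variable names are mine), `amount` tracks what is left in the 50-unit vessel after each refill of the 17-unit vessel: pouring 17 more units in, and dumping the 50-unit vessel whenever it fills, leaves (amount + 17) mod 50.

```python
# Repeatedly fill the 17-unit vessel and decant through the 50-unit vessel.
# Because gcd(17, 50) = 1, this cycles through every residue mod 50.
reachable = set()
amount = 0                          # units currently in the 50-unit vessel
for _ in range(50):
    amount = (amount + 17) % 50     # pour 17 in; dump each full 50
    reachable.add(amount)

assert reachable == set(range(50))  # every count from 0 to 49 is reachable
```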

This extremely elegant solution is due to Ivan Rezanka. Ivan has shown how to solve this problem without the extra bucket. The basic hint is to put an amount of supertox in the large vessel such that when the vessel is subsequently filled to completion with water, the desired concentration is achieved. This can be done if the larger vessel is viewed as consisting of L units, where L is the least common multiple of the numbers 2, 3, ..., k.

Solution to Dig That!

In these solutions, call the north-south roads columns and the east-west roads rows. The center line is column 4 between Start and End.

  1. If the tunnel is at most eight blocks long and begins at Start and ends at End, then what is the minimum number of probe devices you would need to guarantee to determine the precise route of the tunnel in one hour?

Because the straight line road from Start to End is of length six, the most the tunnel can deviate from the center line is by one. For this reason, place probes one away from the center line at the intersections indicated by the circles in Figure 1-25.

Figure 1-25: Because you know the direction of the roads as they leave an intersection, each pair on each row determines the fate of the tunnel on that row and the next.

Here is the reasoning. If no probe detects a tunnel, then the tunnel goes north up the center. It has no choice, because there are no loops. If a tunnel goes through a probe (represented by a circle), we can tell whether it is coming from the south and whether it is going north, west, or east. For example, if neither probe at row 2 detects the tunnel, then the tunnel must go north up the center at least till east-west cross street 3. By contrast, if the tunnel approaches the probe at row 2, column 5 from the west and leaves going north, then we know the following: The tunnel went north from Start one block, then turned east one block (where it encounters the probe), then turned north one block (to arrive at column 5 and row 3). In general, each pair of probes at row i can determine which column (north-south road) was used in coming from row i-1 and which column is taken to go to row i+1. That information is enough to determine the east-west directions on each row.

  1. If you had only one probe, what is the minimum time it would take to guarantee to determine the outcome, assuming you could move the probe every hour?

You can guarantee to map out the tunnel in six hours. Test the points in Figure 1-25 in any order, one per hour. Suppose every probe is negative in your first five attempts. No matter which probe point you have left out, there will be an ambiguity in the route (straight north up the center, or a detour at the probe point still unexplored). You may be able to stop early if a detour and a return to the central path have been detected, but this is not guaranteed.

  1. If the tunnel is at most eight blocks long and begins at Start and ends at End, then what is the minimum number of point probe devices you would need to place to guarantee to determine the precise route of the tunnel in one hour?

The problem can be solved with nine probes as in Figure 1-26.

Figure 1-26: The center column’s point probes tell whether the tunnel goes up the center line. If it misses the center, it must go either east or west. Because of the length limitation, it cannot go in both directions. Without reading the explanation, you may be able to figure out which column is taken between every pair of rows.

From row 1 to row 2, the tunnel takes column 3 if neither probe A nor B detects it. The tunnel goes up column 4 if A detects it, and column 5 if B detects it but not A. From row 2 to row 3, the tunnel goes up column 3 if neither C nor B detects it, column 4 if C detects it, and column 5 if B is the only one that detects it. From row 3 to row 4, the story is similar. The tunnel takes column 3 if neither D nor E detects it, column 4 if D detects it and column 5 if E detects it. Row 4 is just like row 2, so the pattern continues.

  1. How few point probes could you use over two hours to guarantee to find the tunnel?

In the first hour, just probe the center streets as in Figure 1-26. In the second hour, probe the eastern side of the first miss. If the tunnel is there, then it stays east until it rejoins the center. It cannot go to the western side because the tunnel is only eight blocks long.

Peter Carpenter of Wales contributed to these solutions. Tom Rokicki found the solutions for 3 and 4.

This problem becomes much more challenging if the tunnel could be longer. Solving it in general is an open problem.

Solution to Preferential Romance

  1. Are Bob and Alice compatible, passably compatible or neither? After dropping the smallest number of edges in zero, one, or both spouse’s preferences, try to find a consistent ordering.

They are passably compatible. Drop Bob’s MC edge. (He should care more about culture, it seems.) Here is a consistent ordering: BRKCWFPOEMHTJ.

  1. Can you describe an algorithm for the counseling company to use to help marriages in distress? That is, try to find a method so spouses have to drop as few preferences as possible.

The technique uses graph theory. Each circle in the figure is a “node” and each arrow represents a one-way edge. In graph-theoretic notation, let graph G1 = (N1, E1), where N1 is a set of nodes and E1 is a set of directed (one-way) edges on those nodes. Similarly, G2 = (N2, E2).

In the pseudo-code below, ∪ means set union. For example {X, Y, Z} ∪ {Z, W} = {X, Y, Z, W}. Also, ∩ means set intersection. For example {X, Y, Z} ∩ {Z, W} = {Z}. Finally, the minus sign (-) means set difference. For example {X, Y, Z} - {Z, W} = {X, Y}.

A cycle is a tour that starts at some node X and ends at X such that it always follows the one-way edges. A root is a node such that no one-way edge points to that node.

Here is the pseudo-code:

 Given G1 = (N1, E1) and G2 = (N2, E2)
 H := (N1 ∪ N2, E1 ∪ E2, EdgeMarks)
   where EdgeMarks is of the same size as E1 ∪ E2.
   If e is in E1 ∩ E2, then e has a null mark.
   If e is in E1 - E2, then e has a mark of 1.
   If e is in E2 - E1, then e has a mark of 2.
 To find a consistent set of preferences, find the fewest
   edges having marks of 1 or 2 such that dropping them eliminates
   all cycles.
 To find a consistent ordering, start at a root of H
   and write down its label. Then eliminate that root and the
   edges leaving that root from H, and repeat.

This pseudo-code guarantees to find a consistent ordering, because any cycle-free graph will be consistent. To obtain a cycle-free graph, it is sufficient to eliminate marked edges, because unmarked ones are shared and we know that each spouse alone has produced a graph without cycles.

This is not quite an algorithm in that I did not tell you how to find the fewest edges to drop. In fact, that problem is NP-complete (a topic we’ll discuss in Part II), so it is hard to find the fewest such edges, but heuristics such as simulated annealing (also discussed in Part II) work well.
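The pseudo-code above can be made concrete. In this Python sketch (function names and the toy example are mine), non-shared edges play the role of marked edges, and a greedy pass drops one marked edge per cycle found; since the optimal choice is NP-complete, this is only a heuristic, not a guarantee of the fewest drops.

```python
def find_cycle(nodes, edges):
    """Return a list of edges forming a cycle, or None (assumes no self-loops)."""
    adj = {n: [] for n in nodes}
    for e in edges:
        adj[e[0]].append(e)
    color = {n: 0 for n in nodes}            # 0 = unseen, 1 = on path, 2 = done
    def dfs(n, stack):
        color[n] = 1
        for e in adj[n]:
            m = e[1]
            if color[m] == 1:                # back edge closes a cycle
                i = next(j for j, f in enumerate(stack) if f[0] == m)
                return stack[i:] + [e]
            if color[m] == 0:
                found = dfs(m, stack + [e])
                if found:
                    return found
        color[n] = 2
        return None
    for n in nodes:
        if color[n] == 0:
            found = dfs(n, [])
            if found:
                return found
    return None

def reconcile(edges1, edges2):
    """Drop non-shared edges until the union is acyclic; return (order, dropped)."""
    nodes = {n for e in edges1 | edges2 for n in e}
    shared = edges1 & edges2
    edges = set(edges1 | edges2)
    dropped = []
    while True:
        cycle = find_cycle(nodes, edges)
        if cycle is None:
            break
        marked = [e for e in cycle if e not in shared]
        edges.discard(marked[0])             # each spouse's graph is acyclic,
        dropped.append(marked[0])            # so every cycle has a marked edge
    order, remaining, todo = [], set(edges), set(nodes)
    while todo:                              # repeatedly peel off a root
        root = next(n for n in todo if all(e[1] != n for e in remaining))
        order.append(root)
        todo.discard(root)
        remaining = {e for e in remaining if e[0] != root}
    return order, dropped

# Tiny example: the spouses agree on A > B but disagree on B vs C.
order, dropped = reconcile({('A', 'B'), ('B', 'C')}, {('A', 'B'), ('C', 'B')})
assert len(dropped) == 1 and len(order) == 3
```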

Solution to No Change for the Holidays

  1. If you did not know how much the purchase price is except that it is a whole number amount between $1 and $100 inclusive, which amounts would you write on your three checks in order to minimize Claude’s change?

If the three checks are made out in the amounts of $15, $30, and $60, then you reduce the maximum change left with Claude to $14 no matter what the price. To see this, note that you can combine checks to sum to $15, $30, $45, $60, $75, $90, and $105. Because consecutive sums differ by only $15, any amount between $1 and $100 is no more than $14 below one of these sums.
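A short check (names are mine) confirms both the list of subset sums and the worst-case change:

```python
from itertools import combinations

checks = [15, 30, 60]
sums = {sum(c) for r in range(1, 4) for c in combinations(checks, r)}
assert sums == {15, 30, 45, 60, 75, 90, 105}

# Worst case over all prices: smallest covering sum minus the price.
worst = max(min(s - price for s in sums if s >= price)
            for price in range(1, 101))
assert worst == 14
```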

  1. Suppose Claude publishes his four whole number prices in an advertisement that you see. Can you show how he can guarantee to do so in such a way that at least one item will yield him non-zero change no matter which three check amounts you write?

Claude might publish $50, $51, $53, and $59. Suppose you try to find a set of checks that eliminates all change no matter which item your teenager selects. Some check must be $50 or less or the $50 item would require change. If $50, then another must be $1, so the third must be $8, because every item must be purchasable. In that case, the $53 would require change. If your teenager has no check for $50, but one for a lesser amount x, then your teenager also requires a check for $50 - $x. The remaining check must reach $59, so $51 and $53 will require change. Claude wins.
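The case analysis can also be settled by brute force (my own sketch). A check larger than $59 can never take part in an exact payment of these prices, so checks from $1 to $59 suffice:

```python
from itertools import combinations

prices = [50, 51, 53, 59]

def payable(checks, price):
    # some subset of the checks sums exactly to the price
    return any(sum(c) == price
               for r in range(1, 4)
               for c in combinations(checks, r))

exists = False
for a in range(1, 60):
    for b in range(a, 60):
        for c in range(b, 60):
            if all(payable((a, b, c), p) for p in prices):
                exists = True
assert not exists        # no three checks cover all four prices exactly
```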

Solution to Quiet in the Depths

  1. Assuming blood pressures are three digits long, can each department report one three digit number so that the captain can infer a blood pressure among the 10 middle blood pressures of the 25 sailors?

Each department sends back its median. Order the medians in ascending order, denoting them M1, M2, M3, M4, and M5 and their respective departmental groups G1, G2, G3, G4, and G5. Figure 1-27 shows this ordering.

Figure 1-27: The groups have been ordered in ascending order based on their medians. Thus, M1 ≤ M2 ≤ M3 ≤ M4 ≤ M5. Within each group, imagine low blood pressures at the bottom of the line segment corresponding to that group and high blood pressures near the top.

The number M3 must be one of the middle nine of the 25 blood pressure values. Here is why: Two sailors in G3 have blood pressures less than or equal to M3, the three lowest blood pressures of G2 must be less than or equal to M3, and the three lowest pressures of G1 must be less than or equal to M3. This means that at least 2 + 3 + 3 = 8 sailors must have lower or equal blood pressures to M3. Those are colored black in Figure 1-28.

Figure 1-28: The dark rectangles represent values that are less than or equal to M3.

Symmetrically, at least eight sailors have blood pressures greater than or equal to M3. This means that M3 must be greater than or equal to at least eight values and less than or equal to at least eight values, so M3 is in the middle nine.
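A randomized spot check (group sizes match the puzzle; the value range and seed are arbitrary choices of mine) illustrates that the median of medians always falls within the middle nine of the 25 values:

```python
import random

random.seed(1)
for _ in range(1000):
    pressures = [random.randint(100, 199) for _ in range(25)]
    groups = [sorted(pressures[i:i + 5]) for i in range(0, 25, 5)]
    medians = sorted(g[2] for g in groups)
    m3 = medians[2]                     # median of the five medians
    ranked = sorted(pressures)
    assert ranked[8] <= m3 <= ranked[16]   # within the middle nine
```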

  1. The captain wants to find some blood pressure value that is sure to be less than or equal to the median and some blood pressure value that is sure to be greater than or equal to the median. Using the assumptions of the first question, can the captain obtain such values after one three-digit report from each department?

Again the captain asks for the medians of each department. Ordering them again as M1, M2, M3, M4, and M5, the captain knows that M1 is less than or equal to the median of the 25 blood pressure values and M5 is greater than or equal to the median of the 25 values. Here is why: M1 is less than or equal to two blood pressure values of G1 and three from each of the groups G2, G3, G4, and G5 for a total of 14 values. These are shown in Figure 1-29. So M1 is certainly less than or equal to the median. Symmetrically, M5 is certainly greater than or equal to the median.
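The bounding claim admits the same style of randomized check (the value range and seed are arbitrary choices of mine): the smallest and largest departmental medians always bracket the overall median.

```python
import random

random.seed(2)
for _ in range(1000):
    pressures = [random.randint(100, 199) for _ in range(25)]
    groups = [sorted(pressures[i:i + 5]) for i in range(0, 25, 5)]
    medians = sorted(g[2] for g in groups)
    overall = sorted(pressures)[12]     # median of all 25 values
    assert medians[0] <= overall <= medians[4]
```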

Figure 1-29: The dark rectangles represent values greater than or equal to M1.

Note 

The idea of using the median of the medians comes from “selection algorithms” for “order statistics.” The paper that introduced this idea was written by some of the greatest algorithmists of all time: Blum, Floyd, Pratt, Rivest, and Tarjan.




Puzzles for Programmers and Pros
ISBN: 0470121688
Year: 2007
Pages: 81
Authors: Dennis Shasha