Merge pull request #22 from Necmttn/master

fix typos.
Peter Norvig 2017-12-04 22:24:00 -08:00 committed by GitHub
commit 2c8184ad7e
17 changed files with 43 additions and 43 deletions

View File

@ -108,7 +108,7 @@
" return len(possible_dates) == 1\n",
"\n",
"def hear(possible_dates, *statements):\n",
" \"Return the subset of possibile dates that are consistent with all the statements.\"\n",
" \"Return the subset of possible dates that are consistent with all the statements.\"\n",
" return {date for date in possible_dates\n",
" if all(stmt(date) for stmt in statements)}\n",
"\n",

View File

@ -30,7 +30,7 @@
"- `Move`: A *move* is a set of positions to flip. A position will be an integer index into the coin sequence, so a move is a set of these such as `{0, 2}`, which we can interpret as \"flip the 12 o'clock and 6 o'clock positions.\" \n",
"- `all_coins`: Set of all possible coin sequences: `{'HHHH', 'HHHT', ...}`.\n",
"- `rotations`: The function `rotations(coins)` returns the set of all 4 rotations of the coin sequence.\n",
"- `update`: The function `update(belief, move)` retuns an updated belief state, representing all the possible coin sequences that could result from any devil rotation followed by the specified flip(s). (But don't flip `'HHHH'`, because the game would have already ended.)\n",
"- `update`: The function `update(belief, move)` returns an updated belief state, representing all the possible coin sequences that could result from any devil rotation followed by the specified flip(s). (But don't flip `'HHHH'`, because the game would have already ended.)\n",
"- `flip`: The function `flip(coins, move)` flips the specified positions within the coin sequence."
]
},
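The vocabulary in the hunk above translates almost directly into code. Here is a minimal sketch of `rotations`, `flip`, and `update` under that representation; the bodies are my own guesses at the idea, not the notebook's actual definitions:

    def rotations(coins):
        "The set of all 4 rotations of a coin sequence such as 'HHHT'."
        return {coins[i:] + coins[:i] for i in range(4)}

    def flip(coins, move):
        "Flip the specified positions within the coin sequence."
        return ''.join(('T' if c == 'H' else 'H') if i in move else c
                       for i, c in enumerate(coins))

    def update(belief, move):
        "Coin sequences reachable by any devil rotation followed by the flip(s)."
        return {flip(rotated, move)
                for coins in belief if coins != 'HHHH'   # the game would already be over
                for rotated in rotations(coins)}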

View File

@ -35,7 +35,7 @@
"* **Location**: A location is a **point** in two-dimensional space (we assume keyboards are flat).\n",
"* **Path**: A path connects the letters in a word. In the picture above the path is curved, but a shortest path is formed by connecting straight line **segments**, so maybe we need only deal with straight lines.\n",
"* **Segment**: A line segment is a straight line between two points.\n",
"* **Length**: Paths and Segments have lengths; the distance travelled along them.\n",
"* **Length**: Paths and Segments have lengths; the distance traveled along them.\n",
"* **Words**: We will need a list of allowable words (in order to find the one with the longest path).\n",
"* **Work Load**: If we want to find the average path length over a typical work load, we'll have to represent a work load: not\n",
"just a list of words, but a measure of how frequent each word (or each segment) is.\n",
@ -2058,7 +2058,7 @@
"* Hillclimbing just keeps the one best keyboard it has found so far. Other optimization techniques such as\n",
"[beam search](http://en.wikipedia.org/wiki/Beam_search) or [genetic algorithms](http://en.wikipedia.org/wiki/Genetic_algorithm) or [ant colony optimization](http://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms) maintain several candidates at a time. Is that a good idea?\n",
"\n",
"* The code in this notebook emphasises clarity, not efficiency. Can you modify the code (or perhaps port it to another language) and make it twice as efficient? 10 times? 100 times?\n",
"* The code in this notebook emphasizes clarity, not efficiency. Can you modify the code (or perhaps port it to another language) and make it twice as efficient? 10 times? 100 times?\n",
"\n",
"* What other factors do you think are important to user satisfaction with a keyboard. Can you measure them?\n",
"\n",

View File

@ -402,7 +402,7 @@
"**No.** The game is a win for the second player, not the first.\n",
"This agrees with [xkcd](https://xkcd.com/)'s Randall Monroe, who [says](https://blog.xkcd.com/2007/12/31/ghost/) *\"I hear if you use the Scrabble wordlist, its always a win for the second player.\"*\n",
"\n",
"But ... Wikipedia says that the minimum word length can be \"three or four letters.\" In `enable1` the limit was three; let's try agian with a limit of four:"
"But ... Wikipedia says that the minimum word length can be \"three or four letters.\" In `enable1` the limit was three; let's try again with a limit of four:"
]
},
{

View File

@ -645,7 +645,7 @@
"\n",
"In Way 1, we could pre-sort the rectangles (say, biggest first). Then we try to put the biggest rectangle in all possible positions on the grid, and for each position that fits, try putting the second biggest rectangle in all remaining positions, and so on. As a rough estimate, assume there are on average about 10 ways to place a rectangle. Then this way will look at about 10<sup>5</sup> = 100,000 combinations.\n",
"\n",
"In Way 2, we consider the positions in some fixed order; say top-to-bottom, left-to right. Take the first empty position (say, the upper left corner). Try putting each of the rectangles there, and for each one that fits, try all possible rectangles in the next empty position, and so on. There are only 5! permutations of rectangles, and each rectangle can go either horizontaly or vertically, so we would have to consider 5! &times; 2<sup>5</sup> = 3840 combinations. Since 3840 &lt; 100,000, I'll go with Way 2. Here is a more precise description:\n",
"In Way 2, we consider the positions in some fixed order; say top-to-bottom, left-to right. Take the first empty position (say, the upper left corner). Try putting each of the rectangles there, and for each one that fits, try all possible rectangles in the next empty position, and so on. There are only 5! permutations of rectangles, and each rectangle can go either horizontally or vertically, so we would have to consider 5! &times; 2<sup>5</sup> = 3840 combinations. Since 3840 &lt; 100,000, I'll go with Way 2. Here is a more precise description:\n",
"\n",
"> Way 2: To `pack` a set of rectangles onto a grid, find the first empty cell on the grid. Try in turn all possible placements of any rectangle (in either orientation) at that position. For each one that fits, try to `pack` the remaining rectangles, and return the resulting grid if one of these packings succeeds. "
]
@ -1239,7 +1239,7 @@
" pass\n",
" \n",
"def replace_all(text, olds, news):\n",
" \"Replace each occurence of each old in text with the corresponding new.\"\n",
" \"Replace each occurrence of each old in text with the corresponding new.\"\n",
" # E.g. replace_all('A + B', ['A', 'B'], [1, 2]) == '1 + 2'\n",
" for (old, new) in zip(olds, news):\n",
" text = text.replace(str(old), str(new))\n",

View File

@ -402,7 +402,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# (2) Count Strings with Alphabetic First Occurences\n",
"# (2) Count Strings with Alphabetic First Occurrences\n",
"\n",
"Here's another problem:\n",
"\n",

View File

@ -1254,7 +1254,7 @@
" \n",
"$P(w_1 \\ldots w_n) = P(w_1 \\mid start) \\times P(w_2 \\mid w_1) \\times P(w_3 \\mid w_2) \\ldots \\times \\ldots P(w_n \\mid w_{n-1})$\n",
"\n",
"This is called the *bigram* model, and is equivalent to taking a text, cutting it up into slips of paper with two words on them, and having multiple bags, and putting each slip into a bag labelled with the first word on the slip. Then, to generate language, we choose the first word from the original single bag of words, and chose all subsequent words from the bag with the label of the previously-chosen word. To determine the probability of a word sequence, we multiply together the conditional probabilities of each word given the previous word. We'll do this with a function, `cPword` for \"conditional probability of a word.\"\n",
"This is called the *bigram* model, and is equivalent to taking a text, cutting it up into slips of paper with two words on them, and having multiple bags, and putting each slip into a bag labeled with the first word on the slip. Then, to generate language, we choose the first word from the original single bag of words, and chose all subsequent words from the bag with the label of the previously-chosen word. To determine the probability of a word sequence, we multiply together the conditional probabilities of each word given the previous word. We'll do this with a function, `cPword` for \"conditional probability of a word.\"\n",
"\n",
"$P(w_n \\mid w_{n-1}) = P(w_{n-1}w_n) / P(w_{n-1}) $"
]
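As a toy illustration of `cPword` and the bigram model, here is a self-contained sketch built from a miniature corpus; the corpus, the names, and the simple back-off for unseen bigrams are all my own choices, not the notebook's:

    from collections import Counter

    corpus = 'the cat sat on the mat the cat ate'.split()
    UNIGRAMS = Counter(corpus)
    BIGRAMS = Counter(zip(corpus, corpus[1:]))
    N = sum(UNIGRAMS.values())

    def Pword(word):
        "Unigram probability P(word)."
        return UNIGRAMS[word] / N

    def cPword(word, prev):
        "Conditional probability P(word | prev) = P(prev word) / P(prev)."
        if BIGRAMS[(prev, word)] > 0 and UNIGRAMS[prev] > 0:
            return BIGRAMS[(prev, word)] / UNIGRAMS[prev]
        return Pword(word)               # crude back-off for unseen bigrams

    def Pwords_bigram(words):
        "Probability of a word sequence under the bigram model."
        prob = Pword(words[0])           # approximating P(w1 | start) by P(w1)
        for prev, word in zip(words, words[1:]):
            prob *= cPword(word, prev)
        return prob

    Pwords_bigram(['the', 'cat', 'sat'])   # == 3/9 * 2/3 * 1/2, about 0.111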
@ -1419,9 +1419,9 @@
}
],
"source": [
"tolkein = 'adrybaresandyholewithnothinginittositdownonortoeat'\n",
"print segment(tolkein)\n",
"print segment2(tolkein)"
"tolkien = 'adrybaresandyholewithnothinginittositdownonortoeat'\n",
"print segment(tolkien)\n",
"print segment2(tolkien)"
]
},
{
@ -1610,7 +1610,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The issue here is the finality of a probability of zero. Out of the three 15-letter words, it turns out that \"nongovernmental\" is in the dictionary, but if it hadn't been, if somehow our corpus of words had missed it, then the probability of that whole phrase would have been zero. It seems that is too strict; there must be some \"real\" words that are not in our dictionary, so we shouldn't give them probability zero. There is also a question of likelyhood of being a \"real\" word. It does seem that \"neverbeforeseen\" is more English-like than \"zqbhjhsyefvvjqc\", and so perhaps should have a higher probability.\n",
"The issue here is the finality of a probability of zero. Out of the three 15-letter words, it turns out that \"nongovernmental\" is in the dictionary, but if it hadn't been, if somehow our corpus of words had missed it, then the probability of that whole phrase would have been zero. It seems that is too strict; there must be some \"real\" words that are not in our dictionary, so we shouldn't give them probability zero. There is also a question of likelihood of being a \"real\" word. It does seem that \"neverbeforeseen\" is more English-like than \"zqbhjhsyefvvjqc\", and so perhaps should have a higher probability.\n",
"\n",
"We can address this by assigning a non-zero probability to words that are not in the dictionary. This is even more important when it comes to multi-word phrases (such as bigrams), because it is more likely that a legitimate one will appear that has not been observed before.\n",
"\n",

View File

@ -839,7 +839,7 @@
"\n",
"- *In `occ(n)`, is it ok to start from all empty houses, rather than considering layouts of partially-occupied houses?* Yes, because the problem states that initially all houses are empty, and each choice of a house breaks the street up into runs of acceptable houses, flanked by unacceptable houses. If we get the computation right for a run of `n` acceptable houses, then we can get the whole answer right. A key point is that the chosen first house breaks the row of houses into 2 runs of *acceptable* houses, not 2 runs of *unoccupied* houses. If it were unoccupied houses, then we would have to also keep track of whether there were occupied houses to the right and/or left of the runs. By considering runs of acceptable houses, eveything is clean and simple.\n",
"\n",
"- *In `occ(7)`, if the first house chosen is 2, that breaks the street up into runs of 1 and 3 acceptable houses. There is only one way to occupy the 1 house, but there are several ways to occupy the 3 houses. Shouldn't the average give more weight to the 3 houses, since there are more possibilities there?* No. We are caclulating occupancy, and there is a specific number (5/3) which is the expected occupancy of 3 houses; it doesn't matter if there is one combination or a million combinations that contribute to that expected value, all that matters is what the expected value is.\n",
"- *In `occ(7)`, if the first house chosen is 2, that breaks the street up into runs of 1 and 3 acceptable houses. There is only one way to occupy the 1 house, but there are several ways to occupy the 3 houses. Shouldn't the average give more weight to the 3 houses, since there are more possibilities there?* No. We are calculating occupancy, and there is a specific number (5/3) which is the expected occupancy of 3 houses; it doesn't matter if there is one combination or a million combinations that contribute to that expected value, all that matters is what the expected value is.\n",
"\n",
"\n"
]
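The 5/3 figure quoted above is easy to reproduce. Here is my own short implementation of the recurrence implied by that discussion (not necessarily the notebook's `occ`):

    from functools import lru_cache

    @lru_cache(None)
    def occ(n):
        "Expected number of occupied houses in a run of n acceptable houses."
        if n <= 0:
            return 0
        # The first occupant picks one of the n houses uniformly; choosing house i
        # (1-based) leaves independent runs of i-2 and n-i-1 acceptable houses.
        return 1 + sum(occ(i - 2) + occ(n - i - 1) for i in range(1, n + 1)) / n

    occ(3)   # == 1.666..., i.e. the expected occupancy 5/3 mentioned above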

View File

@ -247,7 +247,7 @@
"- We have multiple balls of the same color. \n",
"- An outcome is a *set* of balls, where order doesn't matter, not a *sequence*, where order matters.\n",
"\n",
"To account for the first issue, I'll have 8 different white balls labelled `'W1'` through `'W8'`, rather than having eight balls all labelled `'W'`. That makes it clear that selecting `'W1'` is different from selecting `'W2'`.\n",
"To account for the first issue, I'll have 8 different white balls labeled `'W1'` through `'W8'`, rather than having eight balls all labeled `'W'`. That makes it clear that selecting `'W1'` is different from selecting `'W2'`.\n",
"\n",
"The second issue is handled automatically by the `P` function, but if I want to do calculations by hand, I will sometimes first count the number of *permutations* of balls, then get the number of *combinations* by dividing the number of permutations by *c*!, where *c* is the number of balls in a combination. For example, if I want to choose 2 white balls from the 8 available, there are 8 ways to choose a first white ball and 7 ways to choose a second, and therefore 8 &times; 7 = 56 permutations of two white balls. But there are only 56 / 2 = 28 combinations, because `(W1, W2)` is the same combination as `(W2, W1)`.\n",
"\n",
@ -655,7 +655,7 @@
}
},
"source": [
"So the probabilty of 6 red balls is then just 9 choose 6 divided by the size of the sample space:"
"So the probability of 6 red balls is then just 9 choose 6 divided by the size of the sample space:"
]
},
{

View File

@ -2249,7 +2249,7 @@
}
},
"source": [
"A table and a plot will give a feel for the `util` function. Notice the characterisitc concave-down shape of the plot."
"A table and a plot will give a feel for the `util` function. Notice the characteristics concave-down shape of the plot."
]
},
{

View File

@ -21,7 +21,7 @@
" R: Wotans plan will be fulfilled\n",
" S: Valhalla will be destroyed\n",
"\n",
"For some sentences, it takes detailed knowledge to get a good translation. The following two sentences are ambiguous, with different prefered interpretations, and translating them correctly requires knowledge of eating habits:\n",
"For some sentences, it takes detailed knowledge to get a good translation. The following two sentences are ambiguous, with different preferred interpretations, and translating them correctly requires knowledge of eating habits:\n",
"\n",
" I will eat salad or I will eat bread and I will eat butter. P (Q ⋀ R)\n",
" I will eat salad or I will eat soup and I will eat ice cream. (P Q) ⋀ R\n",
@ -30,7 +30,7 @@
"\n",
" Rule('{P} ⇒ {Q}', 'if {P} then {Q}', 'if {P}, {Q}')\n",
" \n",
"which means that the logic translation will have the form `'P ⇒ Q'`, whenever the English sentence has either the form `'if P then Q'` or `'if P, Q'`, where `P` and `Q` can match any non-empty subsequence of characters. Whatever matches `P` and `Q` will be recursively processed by the rules. The rules are in order&mdash;top to bottom, left to right, and the first rule that matches in that order will be accepted, no matter what, so be sure you order your rules carefully. One guideline I have adhered to is to put all the rules that start with a keyword (like `'if'` or `'neither'`) before the rules that start with a variable (like `'{P}'`); that way you avoid accidently having a keyword swallowed up inside a `'{P}'`.\n",
"which means that the logic translation will have the form `'P ⇒ Q'`, whenever the English sentence has either the form `'if P then Q'` or `'if P, Q'`, where `P` and `Q` can match any non-empty subsequence of characters. Whatever matches `P` and `Q` will be recursively processed by the rules. The rules are in order&mdash;top to bottom, left to right, and the first rule that matches in that order will be accepted, no matter what, so be sure you order your rules carefully. One guideline I have adhered to is to put all the rules that start with a keyword (like `'if'` or `'neither'`) before the rules that start with a variable (like `'{P}'`); that way you avoid accidentally having a keyword swallowed up inside a `'{P}'`.\n",
"\n",
"Consider the example sentence `\"If loving you is wrong, I don't want to be right.\"` This should match the pattern \n",
"`'if {P}, {Q}'` with the variable `P` equal to `\"loving you is wrong\"`. But I don't want the variable `Q` to be \n",
@ -441,7 +441,7 @@
"\n",
"* `nothing is better`:<br>doesn't handle quantifiers.\n",
"\n",
"* `Either Wotan will triumph and Valhalla will be saved or else he won't`:<br>gets `'he will'` as one of the propositions, but better would be if that refered back to `'Wotan will triumph'`.\n",
"* `Either Wotan will triumph and Valhalla will be saved or else he won't`:<br>gets `'he will'` as one of the propositions, but better would be if that referred back to `'Wotan will triumph'`.\n",
"\n",
"* `Wotan will intervene and cause Siegmund's death`:<br>gets `\"cause Siegmund's death\"` as a proposition, but better would be `\"Wotan will cause Siegmund's death\"`.\n",
"\n",

View File

@ -455,7 +455,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Opponent modelling\n",
"## Opponent modeling\n",
"\n",
"To have a chance of winning the second round of this contest, we have to predict what the other entries will be like. Nobody knows for sure, but I can hypothesize that the entries will be slightly better than the first round, and try to approximate that by hillclimbing from each of the first-round plans for a small number of steps:"
]

View File

@ -255,7 +255,7 @@
"\n",
"We'll represent a tile as a one-character string, like `'W'`. We'll represent a rack as a string of tiles, usually of length 7, such as `'EELRTTS'`. (I also considered a `collections.Counter` to represent a rack, but felt that `str` was simpler, and with the rack size limited to 7, efficiency was not a major issue.)\n",
"\n",
"The blank tile causes some complications. We'll represent a blank in a player's rack as the underscore character, `'_'`. But once the blank is played on the board, it must be used as if it was a specific letter. However, it doesn't score the points of the letter. I chose to use the lowercase version of the letter to represent this. That way, we know what letter the blank is standing for, and we can distingush between scoring and non-scoring tiles. For example, `'EELRTT_'` is a rack that contains a blank; and `'LETTERs'` is a word played on the board that uses the blank to stand for the letter `S`. \n",
"The blank tile causes some complications. We'll represent a blank in a player's rack as the underscore character, `'_'`. But once the blank is played on the board, it must be used as if it was a specific letter. However, it doesn't score the points of the letter. I chose to use the lowercase version of the letter to represent this. That way, we know what letter the blank is standing for, and we can distinguish between scoring and non-scoring tiles. For example, `'EELRTT_'` is a rack that contains a blank; and `'LETTERs'` is a word played on the board that uses the blank to stand for the letter `S`. \n",
"\n",
"We'll define `letters` to give all the distinct letters that can be made by a rack, and `remove` to remove letters from a rack (after they have been played)."
]
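Under that representation the two helpers could look roughly like this; this is my sketch, and the notebook's own `letters` and `remove` may differ in detail:

    from string import ascii_lowercase

    def letters(rack):
        "Distinct letters the rack can play (lowercase letters come from the blank)."
        playable = set(rack) - {'_'}
        if '_' in rack:
            playable |= set(ascii_lowercase)      # a blank can stand for any letter
        return playable

    def remove(played, rack):
        "Remove the played letters from the rack; a lowercase letter used the blank."
        for L in played:
            rack = rack.replace('_' if L.islower() else L, '', 1)
        return rack

    remove('LETTERs', 'EELRTT_')    # -> '' : the whole rack, including the blank, was used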
@ -5491,7 +5491,7 @@
"source": [
"# Playing a Game\n",
"\n",
"Now let's play a complete game. We start with a bag of tiles wih the official Scrabble&reg; distribution:"
"Now let's play a complete game. We start with a bag of tiles with the official Scrabble&reg; distribution:"
]
},
{

View File

@ -21,7 +21,7 @@
"\n",
"And some of those sums, like 8, can be made multiple ways, while 2 and 12 can only be made one way. \n",
"\n",
"**Yeah. 8 can be made 5 ways, so it has a 5/36 probability of occuring.**\n",
"**Yeah. 8 can be made 5 ways, so it has a 5/36 probability of occurring.**\n",
"\n",
"The interesting thing is that people have been playing dice games for 7,000 years. But it wasn't until 1977 that <a href=\"http://userpages.monmouth.com/~colonel/\">Colonel George Sicherman</a> asked whether is is possible to have a pair of dice that are not regular dice&mdash;that is, they don't have (1, 2, 3, 4, 5, 6) on the six sides&mdash;but have the same distribution of sums as a regular pair&mdash;so the pair of dice would also have to have 5 ways of making 8, but it could be different ways; maybe 7+1 could be one way. Sicherman assumes that each side bears a positive integer.\n",
"\n",
@ -483,18 +483,18 @@
"\n",
"<table class=\"wikitable\">\n",
"<tr>\n",
"<td align=\"centre\"></td>\n",
"<td align=\"centre\">2</td>\n",
"<td align=\"centre\">3</td>\n",
"<td align=\"centre\">4</td>\n",
"<td align=\"centre\">5</td>\n",
"<td align=\"centre\">6</td>\n",
"<td align=\"centre\">7</td>\n",
"<td align=\"centre\">8</td>\n",
"<td align=\"centre\">9</td>\n",
"<td align=\"centre\">10</td>\n",
"<td align=\"centre\">11</td>\n",
"<td align=\"centre\">12</td>\n",
"<td align=\"center\"></td>\n",
"<td align=\"center\">2</td>\n",
"<td align=\"center\">3</td>\n",
"<td align=\"center\">4</td>\n",
"<td align=\"center\">5</td>\n",
"<td align=\"center\">6</td>\n",
"<td align=\"center\">7</td>\n",
"<td align=\"center\">8</td>\n",
"<td align=\"center\">9</td>\n",
"<td align=\"center\">10</td>\n",
"<td align=\"center\">11</td>\n",
"<td align=\"center\">12</td>\n",
"</tr>\n",
"<tr>\n",
"<td>Regular dice:\n",

View File

@ -76,7 +76,7 @@
"\n",
"> **All Tours Algorithm**: *Generate all possible tours of the cities, and choose the shortest tour (the one with minimum tour length).*\n",
"\n",
"My design philosophy is to first write an English description of the algorithm, then write Python code that closely mirrors the English description. This will probably require some auxilliary functions and data structures; just assume they exist; put them on a TO DO list, and eventually define them with the same design philosophy.\n",
"My design philosophy is to first write an English description of the algorithm, then write Python code that closely mirrors the English description. This will probably require some auxiliary functions and data structures; just assume they exist; put them on a TO DO list, and eventually define them with the same design philosophy.\n",
"\n",
"Here is the start of the implementation:"
]
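The notebook's actual implementation isn't part of this hunk; as a stand-in, here is a rough sketch of the All Tours idea (mine, not Norvig's), using complex numbers as city coordinates:

    from itertools import permutations

    def tour_length(tour):
        "Total distance of a tour, returning to the starting city."
        return sum(abs(tour[i] - tour[i - 1]) for i in range(len(tour)))

    def alltours_tsp(cities):
        "Generate every ordering of the cities and keep the shortest tour."
        return min(permutations(cities), key=tour_length)

    cities = [1 + 1j, 6 + 2j, 8 + 8j, 3 + 9j]
    alltours_tsp(cities)       # a shortest closed tour through the four points

Note the factorial blow-up: `permutations` makes this usable only for a handful of cities, which is why the rest of the notebook develops approximate algorithms.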
@ -2507,7 +2507,7 @@
"Mount Vernon, Fairfax County, Virginia\t38.729314\t-77.107386\n",
"Fort Union Trading Post National Historic Site, Williston, North Dakota 1804, ND\t48.000160\t-104.041483\n",
"San Andreas Fault, San Benito County, CA\t36.576088\t-120.987632\n",
"Chickasaw National Recreation Area, 1008 W 2nd St, Sulphur, OK 73086\t34.457043\t-97.012213\n",
"Chickasaw National Recreation Area, 1008 W 2nd St, Sulfur, OK 73086\t34.457043\t-97.012213\n",
"Hanford Site, Benton County, WA\t46.550684\t-119.488974\n",
"Spring Grove Cemetery, Spring Grove Avenue, Cincinnati, OH\t39.174331\t-84.524997\n",
"Craters of the Moon National Monument & Preserve, Arco, ID\t43.416650\t-113.516650\n",
@ -2605,7 +2605,7 @@
"<img src=\"http://norvig.com/ipython/best-road-trip-major-landmarks.jpg\">\n",
"</a>\n",
"\n",
"The two tours are similar but not the same. I think the difference is that roads through the rockies and along the coast of the Carolinas tend to be very windy, so Randal's tour avoids them, whereas my program assumes staright-line roads and thus includes them. William Cook provides an\n",
"The two tours are similar but not the same. I think the difference is that roads through the rockies and along the coast of the Carolinas tend to be very windy, so Randal's tour avoids them, whereas my program assumes straight-line roads and thus includes them. William Cook provides an\n",
"analysis, and a [tour that is shorter](http://www.math.uwaterloo.ca/tsp/usa50/index.html) than either Randal's or mine.\n",
"\n",
"Now let's go back to the [original web page](http://www.realestate3d.com/gps/latlong.htm) to get a bigger map with over 1000 cities. A shell command fetches the file:"
@ -2792,7 +2792,7 @@
"\n",
"It is time to develop the *greedy algorithm*, so-called because at every step it greedily adds to the tour the edge that is shortest (even if that is not best in terms of long-range planning). The nearest neighbor algorithm always extended the tour by adding on to the end. The greedy algorithm is different in that it doesn't have a notion of *end* of the tour; instead it keeps a *set* of partial segments. Here's a brief statement of the algorithm:\n",
"\n",
"> **Greedy Algorithm:** *Maintain a set of segments; intially each city defines its own 1-city segment. Find the shortest possible edge that connects two endpoints of two different segments, and join those segments with that edge. Repeat until we form a segment that tours all the cities.*\n",
"> **Greedy Algorithm:** *Maintain a set of segments; initially each city defines its own 1-city segment. Find the shortest possible edge that connects two endpoints of two different segments, and join those segments with that edge. Repeat until we form a segment that tours all the cities.*\n",
"\n",
"On each step of the algorithm, we want to \"find the shortest possible edge that connects two endpoints.\" That seems like an expensive operation to do on each step. So we will add in some data structures to enable us to speed up the computation. Here's a more detailed sketch of the algorithm:\n",
"\n",
@ -3431,7 +3431,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"If we have more than 3 cities, how do we split them? My approach is to imagine drawing an axis-aligned rectangle that is just big enough to contain all the cities. If the rectangle is wider than it is tall, then order all the cities by *x* coordiante and split that ordered list in half. If the rectangle is taller than it is wide, order and split the cities by *y* coordinate. "
"If we have more than 3 cities, how do we split them? My approach is to imagine drawing an axis-aligned rectangle that is just big enough to contain all the cities. If the rectangle is wider than it is tall, then order all the cities by *x* coordinate and split that ordered list in half. If the rectangle is taller than it is wide, order and split the cities by *y* coordinate. "
]
},
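In code that splitting rule is short; here is my sketch, again with cities as complex numbers (an assumption of these examples, not necessarily the notebook's representation):

    def split_cities(cities):
        "Split the cities in half along the longer dimension of their bounding box."
        xs = [c.real for c in cities]
        ys = [c.imag for c in cities]
        wider = (max(xs) - min(xs)) > (max(ys) - min(ys))
        ordered = sorted(cities, key=(lambda c: c.real) if wider else (lambda c: c.imag))
        mid = len(ordered) // 2
        return ordered[:mid], ordered[mid:]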
{

View File

@ -519,7 +519,7 @@
"\n",
"## 17 May 2016\n",
"\n",
"The Thunder finished off the Spurs and beat the Warriors in game 1. Are the Thunder, like the Cavs, peaking at just the right time, after an inconsistant regular season? Is it time for Warriors fans to panic?\n",
"The Thunder finished off the Spurs and beat the Warriors in game 1. Are the Thunder, like the Cavs, peaking at just the right time, after an inconsistent regular season? Is it time for Warriors fans to panic?\n",
"\n",
"Sure, the Warriors were down a game twice in last year's playoffs and came back to win both times. Sure, the Warriors are still 3-1 against the Thunder this year, and only lost two games all season to elite teams (Spurs, Thunder, Cavs, Clippers, Raptors). But the Thunder are playing at a top level. Here's my update, showing that the loss cost the Warriors 5%:"
]

View File

@ -157,7 +157,7 @@ def sreport(species):
if d==glen: d = '>25'
print 'diameter %s for %s (%d elements)' % (d, s, len(c))
SS[d] += 1
print 'Diameters of %d labelled clusters: %s' % (len(set(species)), showh(SS))
print 'Diameters of %d labeled clusters: %s' % (len(set(species)), showh(SS))
def compare(cl1, cl2):
"Compare two lists of clusters"