\n",
"\n",
"# Portmantout Words\n",
"\n",
"A [***portmanteau***](https://en.wikipedia.org/wiki/Portmanteau) is a word that squishes together two words, like *math* + *athlete* = *mathlete*. Inspired by [**Darius Bacon**](http://wry.me/), I covered this as a programming exercise in my 2012 [**Udacity course**](https://www.udacity.com/course/design-of-computer-programs--cs212). In 2018 I was re-inspired by [**Tom Murphy VII**](http://tom7.org/), who added a new twist: [***portmantout words***](http://www.cs.cmu.edu/~tom7/portmantout/) ([***tout***](https://www.duolingo.com/dictionary/French/tout/fd4dc453d9be9f32b7efe838ebc87599) from the French for *all*), which are defined as:\n",
"\n",
"> A **portmantout** of a set of words *W* is a string *S* such that:\n",
"> 1. Every word in *W* is a **substring** of *S*.\n",
"> 2. The words **overlap**: every word (except the first) starts at an index ≤ the end of the previous word.\n",
"> 3. **Nothing else** is in *S*: every letter in *S* comes from the overlapping words. \n",
"\n",
"Note that a word may appear more than once within *S*. Although not part of the definition, the goal is to get as short an *S* as possible, and to do it for a set *W* of over 100,000 words. This notebook develops a program that found these portmanteaux:\n",
"\n",
"\n",
"\n",
"- **preferendumdums** (prefer, referendum, dumdums): agreeable uninformed voters.\n",
"- **fortyphonshore** (forty, typhons, onshore): a dire weather report. \n",
"- **allegestionstage** (alleges, egestions, onstage): a brutal theatre critique.\n",
"- **skymanipulablearsplittingler** (skyman, manipulable, blears, earsplitting, tinglers): a nerve-damaging aviator.\n",
"- **edinburgherselflesslylyricize** (edinburgh, burghers, herself, selflessly, slyly, lyricize): a Scottish music review.\n",
"- **impromptutankhamenability** (impromptu, tutankhamen, amenability): willingness to see the Egyptian exhibit on the spur of the moment.\n",
"- **dashikimonogrammarianarchy** (dashiki, kimono, monogram, grammarian, anarchy): the chaos that ensues when a linguist gets involved in choosing how to enscribe African/Japanese garb. \n",
"\n",
"# Problem-Solving Strategy\n",
"\n",
"My intuition is that finding a shortest *S* is an NP-hard problem, and with 100,000 words to cover, it is unlikely that I can find the shortest possible solution in a reasonable amount of time. A common approach to NP-hard problems is a **greedy algorithm**: make the locally best choice at each step, in the hope that the steps will fit together into a solution that is not too far from the best solution. \n",
"\n",
"Thus, my approach will be to build up a **path**, starting with one word, and then adding **steps** to the end of the evolving path, one at a time. Each step consists of a word from *W* that overlaps the end of the previous word by at least one letter. I will choose the step that seems to be the best choice at the time (the one that minimizes the number of **excess letters** added to the path, and will never undo a step, even if the path seems to get stuck later on. I distinguish two types of steps:\n",
"\n",
"- **Unused word step**: using a word for the first time. Once we use them all, we're done.\n",
"- **Bridging word step**: if no unused word overlaps the previous word, we need to do something to get back on track. I call that something a **bridge**: a step that repeats a previously-used word in order to provide a word ending that matches the start of some unused word. Sometimes two words are required to build a bridge, but never more than two (with our word set). \n",
"\n",
"There's actually a third type of word, but it doesn't need a corresponding type of step: \n",
"- **Subword**: a word that is a substring of another word. If, say, `ajar` is in *W*, then we know we have to place it in some step along the path. But if `jar` is also in *W*, we don't need a separate step for it—whenever we place `ajar`, we have automatically placed `jar`. We can save computation time by initializing the unused words to be the **nonsubwords** in *W*. \n",
"\n",
"A subword will never be added as an unused word, but it may be used as a bridging word. (*Note:* I use the clumsy term \"nonsubword\" rather than \"superword\", because there are a small number of words, like \"cozy\" and \"july,\" that are neither subwords nor superwords.)\n",
"\n",
"Here is the exact definition of the metric we are trying to minimize:\n",
"\n",
"- **Excess letters**: the number of unneccessary letters that a step adds, relative to a baseline model in which all the words are concatenated with no repeated words and no overlap between them. (That's not a valid solution, but it is useful as a benchmark.) So if a step adds an unused word, and overlap it with the previous word by three letters, that is an excess of -3 (a negative excess is a positive thing): I've saved three letters over just concatenating the unused word. For a bridging word, the excess is the number of letters that do not overlap either the previous word or the next word. \n",
"\n",
"**Examples:** In each row of the table below, `'ajar'` is the previous word, but each row makes different assumptions about what unused words remain, and thus we get different choices for the step to take. The table shows the overlapping letters between the previous word and the step, and in the case of bridges, it shows the next unused word that the step is bridging to. The final column shows the excess score (and letters).\n",
"\n",
"|Previous|Step(s)|Overlap|Bridge to|Type of Step|Excess|\n",
"|--------|----|----|---|---|---|\n",
"| ajar|jarring|jar||*unused word* |-3| \n",
"| ajar|arbitrary|ar||*unused word* |-2|\n",
"| ajar|rabbits|r||*unused word*|-1|\n",
"| ajar|argot|ar|goths|*one-step bridge* |0|\n",
"| ajar|arrow|ar|owlets| *one-step bridge*|1 (r)|\n",
"| ajar|rani, iraq|r|quizzed| *two-step bridge*|5 (anira) | \n",
"\n",
"Let's go over the examples:\n",
"- **jarring**: Here we assume `jarring` is an unused word, and it overlaps by 3 letters, giving it an excess cost of -3, which is the best possible (an overlap of 4 would mean `ajar` is a subword, and we already agreed to eliminate subwords).\n",
"- **arbitrary** and **rabbits**: unused word that overlap by fewer than 3 letters, so would only be chosen if there were no unused words with more overlap.\n",
"- **argot** and **arrow**: One-step bridges; a bridge with the least excess (non-overlapping letters) would be chosen.\n",
"- **rani, iraq**: a two-step bridge. Suppose `quizzed` is the only remaining unused word. There is no single word that bridges from any suffix of `ajar` to any prefix of `quizzed`. But `rani` can bridge from `'r'` to `'i'` and `iraq` can bridge from `'i'` to `'q'`. This two-word bridge has an excess score of 5 due to the letters `anira` not overlapping anything.\n",
"\n",
"We see that unused word steps always have a negative excess cost (that's good) while bridge steps always have a zero or positive excess cost; thus an unused word step is always better than a bridge step (according to this metric).\n",
"\n",
"# Data Type Implementation\n",
"\n",
"Here I describe how to implement the main data types in Python:\n",
"\n",
"- **Word**: a Python `str` (as are subparts of words, like suffixes or individual letters).\n",
"- **Wordset**: a subclass of `set`, denoting a set of words, plus some cached attributes.\n",
"- **Path**: a Python `list` of steps.\n",
"- **Step**: a named tuple of an overlap and a word. Adding `jarring` to `ajar` is `Step(3, 'jarring')`. \n",
"- **Bridge**: a named tuple of an excess cost followed by a list of one or two steps, e.g. `Bridge(1, [Step(2, 'arrow')])`.\n",
"- **Bridges**: a cached table mapping a prefix and a suffix to a bridge. \n",
"\n",
"Here are two example bridges: \n",
"\n",
" W.bridges['ar']['ow'] == Bridge(1, [Step(2, 'arrow')])\n",
" W.bridges['r']['q'] == Bridge(5, [Step(1, 'rani'), Step(1, 'iraq')])\n",
"\n",
"Here are the data types:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from collections import defaultdict, Counter, namedtuple\n",
"from typing import List, Tuple, Set, Dict, Any\n",
"\n",
"class Wordset(set): \"\"\"A set of words.\"\"\"\n",
"Word = str\n",
"Step = namedtuple('Step', 'overlap, word')\n",
"Bridge = namedtuple('Bridge', 'excess, steps')\n",
"Path = List[Step]\n",
"Bridges = Dict[str, Dict[str, Bridge]]"
]
},
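{
"cell_type": "markdown",
"metadata": {},
"source": [
"(*Design note:* `Bridge` puts `excess` first so that ordinary tuple comparison ranks bridges by excess cost; that is what lets the search code below simply take `min(bridges)` to find the cheapest bridge. A quick check:)\n",
"\n",
"    assert Bridge(0, [Step(2, 'arc')]) < Bridge(1, [Step(2, 'arrow')])"
]
},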
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Overall Program Design\n",
"\n",
"I originally thought I would define a major function, `portman`, to generate the portmantout string *S* from the set of words *W*, and a minor function, `is_portman`, to verify the result. But verification was difficult. For example, *S* = `'helloworld'`, would be rejected as non-overlapping if parsed as `'hello'` + `'world'`, but accepted if parsed as `'hello'` + `'low'` + `'world'`. It was hard for `is_portman` to decide which parse was intended, which is a shame because `portman` *knew* which was intended, as it built up the path of steps, but didn't return the path. \n",
"\n",
"Therefore, I decided on the following calling and [naming](https://en.wikipedia.org/wiki/Natalie_Portman) conventions:\n",
"\n",
" P = natalie(W: Wordset) # Find a portmantout path P for a Wordset W\n",
" S = portman(P: Path) # Compute the string S from the path P\n",
" is_portman(P: Path, W: Wordset) # Check whether P is a valid path covering W\n",
"\n",
"Thus I can generate a string *S* with:\n",
"\n",
" S = portman(natalie(W))\n",
" \n",
"# `portman` and `is_portman`\n",
"\n",
"Here we define the functions `portman` and `is_portman`, and the tiny examples `W1`, `P1`, and `S1`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def portman(P: Path) -> Word:\n",
" \"\"\"Compute the portmantout string S from the path P.\"\"\"\n",
" return ''.join(word[overlap:] for (overlap, word) in P)\n",
"\n",
"def is_portman(P: Path, W: Wordset) -> bool:\n",
" \"\"\"Is the Path P a portmantout of the Wordset W?\"\"\"\n",
" S = portman(P)\n",
" return (all(word in S for word in W) # 1. Every word in W is a substring of S\n",
" and all(step.overlap > 0 # 2. The words overlap\n",
" for step in P[1:]) \n",
" and all(step.word in S # 3. Nothing else is in S\n",
" for step in P)) "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"W1 = Wordset({'anarchy', 'dashiki', 'grammarian', 'kimono', 'monogram',\n",
" 'a', 'am', 'an', 'arc', 'arch', 'aria', 'as', 'ash', 'dash', 'gram', \n",
" 'grammar', 'i', 'mar', 'maria', 'mono', 'narc', 'no', 'on', 'ram'})\n",
"\n",
"P1 = [Step(0, 'dashiki'),\n",
" Step(2, 'kimono'),\n",
" Step(4, 'monogram'),\n",
" Step(4, 'grammarian'),\n",
" Step(2, 'anarchy')]\n",
"\n",
"S1 = portman(P1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"S1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"is_portman(P1, W1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# `W`: Murphy's Wordset of 108,709 words \n",
"\n",
"We can make Tom Murphy's 108,709 word file `\"wordlist.asc\"` into a `Wordset` called `W`:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"108709"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"! [ -e wordlist.asc ] || curl -O https://norvig.com/ngrams/wordlist.asc\n",
"\n",
"W = Wordset(open('wordlist.asc').read().split()) \n",
"len(W)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# `natalie`\n",
"\n",
"The function `natalie` does a greedy search for a portmantout path. As stated above, the approach is to start with a path of one word (either given as an optional argument or chosen arbitrarily from the word set *W*), and then repeatedly add steps, each step being either an `unused_step` or one of `bridging_steps`. "
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"def natalie(W: Wordset, start=None) -> Path:\n",
" \"\"\"Return a portmantout path containing all words in W.\"\"\"\n",
" cache_attributes(W)\n",
" P = add_step([], W, Step(0, start or first(W.unused_words)))\n",
" while W.unused_words:\n",
" for step in unused_step(W, P[-1].word) or bridging_steps(W, P[-1].word):\n",
" P = add_step(P, W, step)\n",
" return P"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# `unused_step` and `bridging_steps`\n",
"\n",
"`unused_step` considers every suffix of the previous word, longest suffix first. If a suffix starts an unused words, we choose it. Since we're going longest-suffix first, no other word could do better.\n",
"\n",
"`bridging_steps` also considers every suffix of the previous word, and for each one it looks in the `W.bridges[suf]` table (see below) to see what prefixes (of unused words) we can bridge to from this suffix. Consider all such `W.bridges[suf][pre]` entries that bridge to the prefix of an unused word (as maintained in `W.startswith[pre]`). Out of all such bridges, take one with the minimal excess cost, and return the steps that make up the bridge."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"def unused_step(W: Wordset, prev_word: Word) -> List[Step]:\n",
" \"\"\"Return [Step(overlap, unused_word)] or [].\"\"\"\n",
" for suf in suffixes(prev_word):\n",
" unused_word = first(W.startswith.get(suf, ()))\n",
" if unused_word:\n",
" return [Step(len(suf), unused_word)]\n",
" return []\n",
"\n",
"def bridging_steps(W: Wordset, prev_word: Word) -> List[Step]:\n",
" \"\"\"The steps from the shortest bridge that bridges \n",
" from a suffix of prev_word to a prefix of any unused word.\"\"\"\n",
" bridges = [W.bridges[suf][pre] \n",
" for suf in suffixes(prev_word) if suf in W.bridges\n",
" for pre in W.bridges[suf] if W.startswith[pre]]\n",
" if not bridges: print('prev_word', prev_word)\n",
" return min(bridges).steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"(*Python trivia:* in `unused_step` I do `W.startswith.get(suf, ())`, not `W.startswith[suf]` because the dict in question is a `defaultdict(set)`, and if there is no entry there, I don't want to insert an empty set entry.)"
]
},
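{
"cell_type": "markdown",
"metadata": {},
"source": [
"A standalone demonstration of that pitfall:\n",
"\n",
"    from collections import defaultdict\n",
"    d = defaultdict(set)\n",
"    d['x']           # indexing inserts an empty set under 'x'\n",
"    d.get('y', ())   # .get does not insert anything\n",
"    assert 'x' in d and 'y' not in d"
]
},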
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# `cache_attributes` and `add_step`\n",
"\n",
"To make this efficient, `cache_attributes(W)` caches the following information:\n",
" - `W.subwords`: a set of all the words that are contained within another word in `W`.\n",
" - `W.shortwords`: a set of short words used to build bridges.\n",
" - `W.unused_words`: initially the set of nonsubwords in `W`; when a word is used it is removed from the set.\n",
" - `W.bridges`: a dict where `W.bridges[suf][pre]` gives the best bridge between the affixes.\n",
" - `W.startswith`: a dict that maps from a prefix to all the unused words that start with the prefix. A word is removed from all the places it appears when it is used. Example: `W.startswith['somet'] == {'something', 'sometimes'}`.\n",
" \n",
"These structures are complicated, so don't be discouraged if you have to go over the code several times. "
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"def cache_attributes(W, maxlen=5, end_letters='qujvz'):\n",
" \"\"\"Precompute and cache data structures on attributes of W:\n",
" .subwords, .shortwords, and .bridges are computed once and not changed. \n",
" .unused_words and .startswith are recomputed on each call to `natalie`,\n",
" and are updated by calls to `add_step`.\"\"\"\n",
" if not hasattr(W, 'bridges'): \n",
" W.subwords = subwords(W)\n",
" W.shortwords = {w for w in W if len(w) <= maxlen + (w[-1] in end_letters)}\n",
" W.bridges = build_bridges(W)\n",
" W.unused_words = W - W.subwords\n",
" W.startswith = startswith_table(W.unused_words)\n",
" \n",
"def add_step(P, W, step) -> Path:\n",
" \"\"\"Add step to P; remove word from `W.unused_words` and `W.startswith[pre] for each pre`.\"\"\"\n",
" P.append(step)\n",
" word = step.word\n",
" assert word in W, f'attempt to add \"{word}\", which is not in the word set'\n",
" if word in W.unused_words:\n",
" W.unused_words.remove(word)\n",
" for pre in prefixes(word):\n",
" W.startswith[pre].remove(word)\n",
" if not W.startswith[pre]:\n",
" del W.startswith[pre]\n",
" return P"
]
},
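{
"cell_type": "markdown",
"metadata": {},
"source": [
"(*Python trivia:* the `shortwords` test `len(w) <= maxlen + (w[-1] in end_letters)` relies on `True` counting as 1, granting one extra letter to words that end in a rare letter. A minimal illustration, with word choices of my own:)\n",
"\n",
"    maxlen, end_letters = 5, 'qujvz'\n",
"    def is_short(w): return len(w) <= maxlen + (w[-1] in end_letters)\n",
"    assert is_short('arrow')          # 5 letters: short enough\n",
"    assert not is_short('arrows')     # 6 letters, ends in the common 's'\n",
"    assert is_short('ersatz')         # 6 letters, but ends in the rare 'z'"
]
},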
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Utility Functions"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"def multimap(pairs) -> Dict[Any, set]:\n",
" \"\"\"Given (key, val) pairs, make a dict of {key: {val,...}}.\"\"\"\n",
" result = defaultdict(set)\n",
" for key, val in pairs:\n",
" result[key].add(val)\n",
" return result\n",
"\n",
"def startswith_table(words) -> Dict[str, Set[Word]]: \n",
" \"\"\"A dict mapping a prefix to all the words it starts:\n",
" {'somet': {'something', 'sometimes'},...}.\"\"\"\n",
" return multimap((pre, w) for w in words for pre in prefixes(w))\n",
"\n",
"def subwords(W: Wordset) -> Set[str]:\n",
" \"\"\"All the words in W that are subparts of some other word.\"\"\"\n",
" return {subword for w in W for subword in subparts(w) & W} \n",
" \n",
"def suffixes(word) -> List[str]:\n",
" \"\"\"All non-empty proper suffixes of word, longest first.\"\"\"\n",
" return [word[i:] for i in range(1, len(word))]\n",
"\n",
"def prefixes(word) -> List[str]:\n",
" \"\"\"All non-empty proper prefixes of word.\"\"\"\n",
" return [word[:i] for i in range(1, len(word))]\n",
"\n",
"def subparts(word) -> Set[str]:\n",
" \"\"\"All non-empty proper substrings of word\"\"\"\n",
" return {word[i:j] \n",
" for i in range(len(word)) \n",
" for j in range(i + 1, len(word) + 1)} - {word}\n",
"\n",
"def first(iterable) -> object:\n",
" \"\"\"The first element in an iterable, or None\"\"\"\n",
" return next(iter(iterable), None)"
]
},
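{
"cell_type": "markdown",
"metadata": {},
"source": [
"For concreteness, here is what the affix helpers return on a tiny input (worked out by hand):\n",
"\n",
"    suffixes('ajar') == ['jar', 'ar', 'r']     # longest first\n",
"    prefixes('ajar') == ['a', 'aj', 'aja']     # shortest first\n",
"    subparts('jar')  == {'j', 'a', 'r', 'ja', 'ar'}"
]
},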
{
"cell_type": "markdown",
"metadata": {},
"source": [
"(*Math trivia:* In this context, \"proper\" means \"not whole\". A proper subset is a subset that is not the whole set itself; a proper substring of a word is a substring that is not the whole word itself.)\n",
"\n",
"# Building Bridges\n",
"\n",
"The last piece of the program is the construction of the `W.bridges` table. Recall that we want `W.bridges[suf][pre]` to be a bridge between a suffix of the previous word and a prefix of an unused word, as in the examples:\n",
"\n",
" W.bridges['ar']['ow'] == Bridge(1, [Step(2, 'arrow')])\n",
" W.bridges['ar']['c'] == Bridge(0, [Step(2, 'arc')])\n",
" W.bridges['r']['q'] == Bridge(5, [Step(1, 'rani'), Step(1, 'iraq')])\n",
" \n",
"We build all the bridges in `cache_attributes`, and don't update them as words are used. Thus, `W.bridges['r']['q']` says \"if there are any unused words starting with `'q'`, you can use this bridge, but I'm not promising there are any.\" The caller is responsible for checking that `W.startswith['q']` contains unused word(s).\n",
" \n",
"Bridges should be short. We don't need to consider `antidisestablishmentarianism` as a possible bridge word. Instead, from our 108,709 word set *W*, we'll use `W.shortwords`: 10,273 words with length up to 5, plus six-letter words that end in any of 'qujvz', the rarest letters (there are 20 of these). I also compute a `shortstartswith` table for the `shortwords`, where, for example,\n",
"\n",
" shortstartswith['som'] == {'soma', 'somas', 'some'} # but not 'somebodies', 'something', ...\n",
" \n",
"To build one-word bridges, consider every shortword, and split it up in all possible ways into a prefix that will overlap the previous word, a suffix that will overlap the next word, and a count of zero or more excess letters in the middle that don't overlap anything."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"def splits(word) -> List[Tuple[int, str, str]]: \n",
" \"\"\"A sequence of (excess, pre, suf) tuples.\"\"\"\n",
" return [(word[:i], excess, word[i+excess:])\n",
" for excess in range(len(word) - 1)\n",
" for i in range(1, len(word) - excess)]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('a', 0, 'rrow'),\n",
" ('ar', 0, 'row'),\n",
" ('arr', 0, 'ow'),\n",
" ('arro', 0, 'w'),\n",
" ('a', 1, 'row'),\n",
" ('ar', 1, 'ow'),\n",
" ('arr', 1, 'w'),\n",
" ('a', 2, 'ow'),\n",
" ('ar', 2, 'w'),\n",
" ('a', 3, 'w')]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"splits('arrow')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first element of the list says that `'arrow'` can bridge from `'a'` to `'rrow'` with 0 excess letters; the last says it can bridge from `'a'` to `'w'` with 3 excess letters (which happen to be `'rro'`). Each possible split is passed on to `build_bridge`, which records the bridge in the table under `bridges[pre][suf]` unless there is already a shorter bridge stored there."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"def build_bridge(bridges, word, pre, excess, suf, step2=None):\n",
" \"\"\"Store a new bridge if it has less excess than the previous bridges[pre][suf].\"\"\"\n",
" if suf not in bridges[pre] or excess < bridges[pre][suf].excess:\n",
" steps = [Step(len(pre), word)]\n",
" if step2: steps.append(step2)\n",
" bridges[pre][suf] = Bridge(excess, steps)"
]
},
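{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small check of the replacement logic (with made-up bridge words, relying on the types defined earlier):\n",
"\n",
"    bridges = defaultdict(dict)\n",
"    build_bridge(bridges, 'arrow', 'a', 3, 'w')   # stores Bridge(3, ...)\n",
"    build_bridge(bridges, 'aw', 'a', 0, 'w')      # cheaper: replaces it\n",
"    assert bridges['a']['w'] == Bridge(0, [Step(1, 'aw')])"
]
},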
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now for two-word bridges. I thought that if I allowed all possible two-word bridges the program would be slow because there would be so many of them, and most of them would be too long to be of any use. Thus, I decided to only use two-word bridges that bridge from the last letter in the previous word to the first letter in an unused word.\n",
"\n",
"We start out the same way, looking at every shortword. But this time we look at every suffix of each shortword, and see if the suffix starts another shortword. If it does, then we have a two-word bridge. Here's the complete `build_bridges` function:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"def build_bridges(W: Wordset):\n",
" \"\"\"A table of bridges[pre][suf] == Bridge(excess, [Step(overlap, word)]), e.g.\n",
" bridges['ar']['c'] == Bridge(0, [Step(2, 'arc')]).\"\"\"\n",
" bridges = defaultdict(dict)\n",
" shortstartswith = startswith_table(W.shortwords)\n",
" # One-word bridges\n",
" for word in W.shortwords: \n",
" for split in splits(word):\n",
" build_bridge(bridges, word, *split)\n",
" # Two-word bridges\n",
" for word1 in W.shortwords:\n",
" for suf in suffixes(word1): \n",
" for word2 in shortstartswith[suf]: \n",
" excess = len(word1) + len(word2) - len(suf) - 2\n",
" A, B = word1[0], word2[-1] # First and last letters\n",
" if A != B: # No sense bridging from A to A\n",
" step2 = Step(len(suf), word2)\n",
" build_bridge(bridges, word1, A, excess, B, step2)\n",
" return bridges"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Failure is Not an Option\n",
"\n",
"Is `natalie` guaranteed to terminate? Every iteration either uses up an unused word, or builds a bridge to an unused word that will be used on the next iteration. So, eventually all the words will be used and `natalie` will return a solution. The only way this can fail to happen is if there is no bridge to an unused word. I can prove that this can't happen if I can verify that there is a bridge from every one-letter suffix to every one-letter prefix. The function `missing_bridges` checks for this."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"def missing_bridges(W, alphabet='abcdefghijklmnopqrstuvwxyz'):\n",
" \"\"\"What 1-to-1-letter bridges are missing from W.bridges?\"\"\"\n",
" return {(A, B) for A in alphabet for B in alphabet \n",
" if A != B and B not in W.bridges[A]}"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"cache_attributes(W)\n",
"\n",
"assert not missing_bridges(W)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! *W* has no missing bridges. But the tiny *W1* is missing 623 out of 26 × 25 = 650 1-to-1-letter bridges:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"623"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cache_attributes(W1)\n",
"\n",
"len(missing_bridges(W1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Portmantout Solutions\n",
"\n",
"**Finally!** We're ready to make portmantouts. First for the tiny word set `W1`, for which we must carefully choose the starting word:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Step(overlap=0, word='dashiki'),\n",
" Step(overlap=2, word='kimono'),\n",
" Step(overlap=4, word='monogram'),\n",
" Step(overlap=4, word='grammarian'),\n",
" Step(overlap=2, word='anarchy')]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"natalie(W1, start='dashiki')"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'dashikimonogrammarianarchy'"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"portman(natalie(W1, start='dashiki'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now for the big word set `W`:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 6.81 s, sys: 25.6 ms, total: 6.84 s\n",
"Wall time: 6.84 s\n"
]
}
],
"source": [
"%time P = natalie(W)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"I thought it might take 10 minutes, so under 10 seconds is great!"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(103178, 553669)"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"S = portman(P)\n",
"len(P), len(S)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" The portmantout is about 100,000 steps and a half-million letters long."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Pretty Output\n",
"\n",
"Notice I haven't actually *looked* at the portmantout yet. I didn't want to dump half a million letters into an output cell. Instead, I'll define `report` to print various statistics, summarize the begin and end of the portmantout, and save the full string *S* into the file [natalie.txt](natalie.txt). "
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"def report(W, P, steps=100, letters=1000, save='natalie.txt'):\n",
" S = portman(P)\n",
" sub = W.subwords \n",
" nonsub = W - sub\n",
" uniq = {step.word for step in P} # unique step words in P\n",
" bridge = len(P) - len(nonsub) # number of bridge steps in P\n",
" bridges = sum(len(W.bridges[pre]) for pre in W.bridges) # number of bridges in W\n",
" def L(words): return sum(map(len, words)) # Number of letters\n",
" print(f'W has {len(W):,d} words ({len(nonsub):,d} nonsubwords; {len(sub):,d} subwords).')\n",
" print(f'P has {len(P):,d} steps ({len(uniq):,d} unique words; {bridge:,d} bridge words).')\n",
" print(f'S has {len(S):,d} letters; W has {L(W):,d}; nonsubs have {L(nonsub):,d}.')\n",
" print(f'P has an average overlap of {(L(s.word for s in P)-len(S))/(len(P)-1):.2f} letters.')\n",
" print(f'S has a compression ratio (letters(W)/letters(S)) of {L(W)/len(S):.2f}.')\n",
" print(f'P (and thus S) is {\"\" if is_portman(P, W) else \"NOT \"}a valid portmantout of W.')\n",
" print(f'W has {bridges:,d} bridges from {len(W.shortwords):,d} shortwords, '\n",
" f'and {len(missing_bridges(W))} missing 1-to-1-letter bridges.')\n",
" open(save, \"w\").write(S)\n",
" print(f'S saved as the file \"{save}\".')\n",
" items = ['\\n...' if w is ... else w[:i] + '⋅' + w[i:]\n",
" for i, w in P[:steps] + [(..., ...)] + P[-steps:]]\n",
" print(f'\\nThe first and last {letters} letters:\\n\\n{S[:letters]}\\n...{S[-letters:]}')\n",
" print(f'\\nThe first and last {steps} steps:\\n\\n{\", \".join(items)[1:]}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The step `Step(1, 'sir')` is printed as `s⋅ir` to indicate that `s` is the 1-letter overlap.\n",
"\n",
"I will redefine `is_portman` to be faster. *Python trivia:* if `X, Y` and `Z` are sets, `X <= Y <= Z` means \"is `X` a subset of `Y` and `Y` a subset of `Z`?\" We use the notation here to say that the set of words in *P* must contain all the nonsubwords and can only contain words from *W*."
]
},
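{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance:\n",
"\n",
"    X, Y, Z = {1}, {1, 2}, {1, 2, 3}\n",
"    assert X <= Y <= Z    # X is a subset of Y, and Y is a subset of Z"
]
},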
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"def is_portman(P: Path, W: Wordset) -> str:\n",
" \"\"\"Verify that P forms a valid portmantout string for W.\"\"\"\n",
" all_words = (W - W.subwords) <= set(step.word for step in P) <= W\n",
" overlaps = all((overlap > 0 and P[i - 1].word[-overlap:] == word[:overlap])\n",
" for i, (overlap, word) in enumerate(P[1:], 1)) and P[0].overlap == 0\n",
" return all_words and overlaps"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"W has 108,709 words (64,389 nonsubwords; 44,320 subwords).\n",
"P has 103,178 steps (65,051 unique words; 38,789 bridge words).\n",
"S has 553,669 letters; W has 931,823; nonsubs have 595,805.\n",
"P has an average overlap of 1.65 letters.\n",
"S has a compression ratio (letters(W)/letters(S)) of 1.68.\n",
"P (and thus S) is a valid portmantout of W.\n",
"W has 56,477 bridges from 10,293 shortwords, and 0 missing 1-to-1-letter bridges.\n",
"S saved as the file \"natalie.txt\".\n",
"\n",
"The first and last 1000 letters:\n",
"\n",
"circumspectionicitywidelysiannealerstwhiledinburgherselfnessentiallylsubtenanciescortinglesbianschlussrefectionisinglassertionstagehandsomenessayingsandpileupstairstreamiestrousseaustrianswersatzestfulnessesquicentenniallyricistsktskedgingivalutassellingulauncheddarseniousheredityrannosaursinewyvernshortcakestrelsesquipedalianastigmatickingsidelinesmanganesiannihilatorsoesophagussiestashesitancestriesteemskulledificesspoolstuntsaristsarismskepticsanatoriumswappingrassessableaterserializingersarsaparillaserdiskswallowingspansophiesteeminglershortchangesticulationshoredistrictsarinasmuchnessencesuraeronauticallysergichthyologynecologicallyriformaldehydescriberserksamphirescindeduciblentossupshiftsimmeshinglessenedithersiticalvingloriouslylysinecurescheduledemasculinizedscullerythrocytestatrixescritoiresharpenedginessentiallopathsolariatediouslyerbassettinglershorteddieselsynspiraeastboundlessnessayeditorialistablerstillbirthstoneselflessnessayistsardomshrubbiersnakierasurestringingeredis\n",
"...ymenoiraquothaquandariesiraquarreledeliraquieteningorkiraquainteraniraquartzesiraquarrelersiraquarterdecksiraquondamaquickiesiraquagsiraquailedeliraqindarsiraquarrellersiraquartilesiraqophsiraquailsiraquantityoniraquaffersiraquakersiraquiversiraquartsiraquakedeliraquotidianoiraquietensiraquantifiesiraquackishlyoniraquaintnessiraquebecolloquorumsiraquilledeliraquittorsiraquirksiraquicksilveraniraquicklimemiraquantitiesiraquackieraniraquotersiraquintalsiraquintillionthsiraquietusesiraquagmiryoniraquintanoiraquackismsiraquotationallyoniraquantizedeliraqueeringorkiraquantedeliraquackingorkiraquarantinablemiraquarriesiraquackedeliraquarterbacksiraquakiestuqueuesiraquoinedeliraquailingorkiraquintillionsiraquahogsiraquiveringlyoniraquaveredeliraquaaludesiraquakerismaquintuplicatingorkiraquininesiraquichesiraquickeningorkiraquackyoniraquackeryoniraquaggyoniraquakieraniraquixoticallyoniraquintupledeliraquirkinessiraquincunxesiraquietismsiraquirkedeliraquittancesiraquoinsiraqueazyoniraquantified\n",
"\n",
"The first and last 100 steps:\n",
"\n",
"circumspection, ion⋅icity, city⋅wide, wide⋅ly, ely⋅sian, an⋅nealers, ers⋅twhile, while⋅d, ed⋅inburgh, burgh⋅ers, hers⋅elf, self⋅ness, ess⋅entially, ally⋅ls, s⋅ubtenancies, es⋅corting, ting⋅les, les⋅bians, ans⋅chluss, uss⋅r, r⋅efection, ion⋅ising, ising⋅lass, glass⋅er, asser⋅tions, ons⋅tage, stage⋅hands, hands⋅omeness, ess⋅aying, saying⋅s, s⋅andpile, pile⋅ups, ups⋅tairs, airs⋅tream, stream⋅iest, est⋅rous, trous⋅seaus, aus⋅trians, ans⋅wers, ers⋅atzes, zes⋅tfulness, fulness⋅es, ses⋅quicentennially, ly⋅ricists, ts⋅ktsked, ked⋅ging, ging⋅ival, val⋅utas, tas⋅selling, ling⋅ula, la⋅unched, ched⋅dars, ars⋅enious, us⋅hered, hered⋅ity, ty⋅rannosaurs, urs⋅ine, sine⋅wy, wy⋅verns, s⋅hortcakes, kes⋅trels, els⋅es, ses⋅quipedalian, lian⋅as, anas⋅tigmatic, tic⋅kings, kings⋅ide, side⋅lines, lines⋅man, man⋅ganesian, an⋅nihilators, tors⋅oes, oes⋅ophagus, gus⋅sies, sies⋅tas, stas⋅hes, hes⋅itance, ance⋅stries, es⋅teems, s⋅kulled, ed⋅ifices, ces⋅spools, s⋅tunts, ts⋅arists, ts⋅arisms, s⋅keptics, s⋅anatoriums, s⋅wapping, ping⋅rasses, asses⋅sable, ble⋅aters, ters⋅er, ser⋅ializing, zing⋅ers, s⋅arsaparillas, las⋅erdisks, s⋅wallowing, wing⋅spans, pans⋅ophies, es⋅teeming, \n",
"..., g⋅orki, i⋅raq, q⋅uanted, d⋅eli, i⋅raq, q⋅uacking, g⋅orki, i⋅raq, q⋅uarantinable, e⋅mir, ir⋅aq, q⋅uarries, s⋅ir, ir⋅aq, q⋅uacked, d⋅eli, i⋅raq, q⋅uarterbacks, s⋅ir, ir⋅aq, q⋅uakiest, t⋅uque, que⋅ues, s⋅ir, ir⋅aq, q⋅uoined, d⋅eli, i⋅raq, q⋅uailing, g⋅orki, i⋅raq, q⋅uintillions, s⋅ir, ir⋅aq, q⋅uahogs, s⋅ir, ir⋅aq, q⋅uiveringly, y⋅oni, i⋅raq, q⋅uavered, d⋅eli, i⋅raq, q⋅uaaludes, s⋅ir, ir⋅aq, q⋅uakerism, m⋅aqui, qui⋅ntuplicating, g⋅orki, i⋅raq, q⋅uinines, s⋅ir, ir⋅aq, q⋅uiches, s⋅ir, ir⋅aq, q⋅uickening, g⋅orki, i⋅raq, q⋅uacky, y⋅oni, i⋅raq, q⋅uackery, y⋅oni, i⋅raq, q⋅uaggy, y⋅oni, i⋅raq, q⋅uakier, r⋅ani, i⋅raq, q⋅uixotically, y⋅oni, i⋅raq, q⋅uintupled, d⋅eli, i⋅raq, q⋅uirkiness, s⋅ir, ir⋅aq, q⋅uincunxes, s⋅ir, ir⋅aq, q⋅uietisms, s⋅ir, ir⋅aq, q⋅uirked, d⋅eli, i⋅raq, q⋅uittances, s⋅ir, ir⋅aq, q⋅uoins, s⋅ir, ir⋅aq, q⋅ueazy, y⋅oni, i⋅raq, q⋅uantified\n"
]
}
],
"source": [
"report(W, P)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Questions\n",
"\n",
"The program is complete, but there are still many interesting things to explore, and questions to answer.\n",
"\n",
"**Question: is there an imbalance in starting and ending letters of words?** That could lead to a need for many two-word bridges. We saw in the last 100 steps of *P* multiple repetitions of the two-word bridge \"s⋅ir, ir⋅aq\". That suggests there are too many words that end in \"s\" and too many that start with \"q\". Let's investigate:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"64389"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cache_attributes(W)\n",
"len(W.unused_words)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Letter Starts Ends Ratio\n",
"------ ------ ------ -----\n",
" a 3,528 384 9:1\n",
" b 3,776 6 629:1\n",
" c 5,849 908 6:1\n",
" d 4,093 7,520 1:2\n",
" e 2,470 3,215 1:1\n",
" f 2,794 51 55:1\n",
" g 2,177 6,343 1:3\n",
" h 2,169 351 6:1\n",
" i 2,771 128 22:1\n",
" j 638 0 1:0\n",
" k 566 157 4:1\n",
" l 1,634 1,182 1:1\n",
" m 3,405 657 5:1\n",
" n 1,542 1,860 1:1\n",
" o 1,797 113 16:1\n",
" p 4,977 123 40:1\n",
" q 330 0 1:0\n",
" r 3,811 1,994 2:1\n",
" s 7,388 29,056 1:4\n",
" t 3,097 2,107 1:1\n",
" u 2,557 11 232:1\n",
" v 1,032 6 172:1\n",
" w 1,561 42 37:1\n",
" x 51 68 1:1\n",
" y 207 8,086 1:39\n",
" z 169 21 8:1\n"
]
}
],
"source": [
"starts = Counter(w[0] for w in W.unused_words)\n",
"ends = Counter(w[-1] for w in W.unused_words)\n",
"\n",
"def ratio(L) -> str:\n",
" \"\"\"Approximate ratio of words that start with L to words that end with L.\"\"\"\n",
" s, e = starts[L], ends[L]\n",
" return f'{round(s/e)}:1' if (s > e and e != 0) else f'1:{round(e/s)}'\n",
"\n",
"print('Letter Starts Ends Ratio')\n",
"print('------ ------ ------ -----')\n",
"for L in sorted(starts):\n",
" print(f'{L:>5} {starts[L]:6,d} {ends[L]:6,d} {ratio(L):>5}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Yes, there is a problem: there are many more words that start with `b`, `f`, `p`, `u`, `u` and `v` than that end with those letters. In the other direction 45% of all words end in `s`, but only a quarter of that number start with `s`. The start:end ratio for `y` is 1:39."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.451257202317166"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ends['s'] / len(W.unused_words)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question: what are the most common words in *P*?** These will be bridge words. What do they have in common?"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('sac', 3172),\n",
" ('so', 2212),\n",
" ('lyre', 1655),\n",
" ('of', 1651),\n",
" ('dab', 1622),\n",
" ('gab', 1498),\n",
" ('sun', 1491),\n",
" ('sin', 1427),\n",
" ('sip', 1214),\n",
" ('yam', 1076),\n",
" ('sew', 1000),\n",
" ('lye', 730),\n",
" ('spa', 602),\n",
" ('gun', 500),\n",
" ('erst', 486),\n",
" ('yen', 471),\n",
" ('type', 463),\n",
" ('go', 401),\n",
" ('econ', 399),\n",
" ('she', 395),\n",
" ('semi', 356),\n",
" ('yep', 331),\n",
" ('gap', 328),\n",
" ('sex', 317),\n",
" ('simp', 317)]"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Counter(step.word for step in P).most_common(25)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Indeed, bridging away from `s` is a big concern (half of the top dozen bridges). Even though `sir` and `iraq` dominated the last 50 steps, that's not true of `P` overall. Also, `lyre` and `lye` bridge away from an adverb ending, `ly`. \n",
"\n",
"I'm surprised that `of` shows up so frequently. Let's see what it is bridging from:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('so', 1210),\n",
" ('go', 203),\n",
" ('do', 190),\n",
" ('to', 31),\n",
" ('maleficio', 1),\n",
" ('whereto', 1),\n",
" ('mexico', 1),\n",
" ('vitro', 1),\n",
" ('modulo', 1),\n",
" ('monaco', 1),\n",
" ('poco', 1),\n",
" ('proximo', 1),\n",
" ('pronto', 1),\n",
" ('vulgo', 1),\n",
" ('puerto', 1),\n",
" ('pizzicato', 1),\n",
" ('furioso', 1),\n",
" ('fresno', 1),\n",
" ('franco', 1),\n",
" ('francisco', 1),\n",
" ('fortissimo', 1)]"
]
},
"execution_count": 29,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Counter(P[i-1].word for i, step in enumerate(P) if step.word == 'of').most_common()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that `of` is used in two-word bridges with `so`, `go`, `do` and `to` to bridge away from four letters with a surplus of ends-with over starts-with."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question: What is the distribution of word lengths?** "
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Counter({3: 2,\n",
" 4: 186,\n",
" 5: 1796,\n",
" 6: 4364,\n",
" 7: 8672,\n",
" 8: 11964,\n",
" 9: 11950,\n",
" 10: 8443,\n",
" 11: 6093,\n",
" 12: 4423,\n",
" 13: 2885,\n",
" 14: 1765,\n",
" 15: 1017,\n",
" 16: 469,\n",
" 17: 198,\n",
" 18: 91,\n",
" 19: 33,\n",
" 20: 22,\n",
" 21: 9,\n",
" 22: 4,\n",
" 23: 2,\n",
" 28: 1})"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Counter(sorted(map(len, W.unused_words))) # Counter of word lengths"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question: What is the longest word?** "
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'antidisestablishmentarianism'"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"max(W, key=len)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question: What is the distribution of letters in the Wordset?**"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('e', 68038),\n",
" ('s', 60080),\n",
" ('i', 53340),\n",
" ('a', 43177),\n",
" ('n', 42145),\n",
" ('r', 41794),\n",
" ('t', 38093),\n",
" ('o', 35027),\n",
" ('l', 32356),\n",
" ('c', 23100),\n",
" ('d', 22448),\n",
" ('u', 19898),\n",
" ('g', 17815),\n",
" ('p', 16128),\n",
" ('m', 16062),\n",
" ('h', 12673),\n",
" ('y', 11889),\n",
" ('b', 11581),\n",
" ('f', 7885),\n",
" ('v', 5982),\n",
" ('k', 4892),\n",
" ('w', 4880),\n",
" ('z', 2703),\n",
" ('x', 1677),\n",
" ('j', 1076),\n",
" ('q', 1066)]"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Counter(L for w in W.unused_words for L in w).most_common() # Counter of letters"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question: How many bridges are there?** "
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"56477"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Make a list of all bridges, B\n",
"B = [W.bridges[suf][pre] for suf in W.bridges for pre in W.bridges[suf]]\n",
"len(B)"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Bridge(excess=0, steps=[Step(overlap=1, word='umbel')]),\n",
" Bridge(excess=0, steps=[Step(overlap=1, word='circe')]),\n",
" Bridge(excess=0, steps=[Step(overlap=1, word='pin')]),\n",
" Bridge(excess=1, steps=[Step(overlap=1, word='frosh')]),\n",
" Bridge(excess=0, steps=[Step(overlap=1, word='top')]),\n",
" Bridge(excess=1, steps=[Step(overlap=1, word='mixed')]),\n",
" Bridge(excess=0, steps=[Step(overlap=1, word='ben')]),\n",
" Bridge(excess=0, steps=[Step(overlap=2, word='beer')]),\n",
" Bridge(excess=1, steps=[Step(overlap=1, word='spurs')]),\n",
" Bridge(excess=0, steps=[Step(overlap=2, word='boffo')]),\n",
" Bridge(excess=0, steps=[Step(overlap=1, word='yuks')]),\n",
" Bridge(excess=1, steps=[Step(overlap=1, word='herby')]),\n",
" Bridge(excess=0, steps=[Step(overlap=1, word='kiwis')]),\n",
" Bridge(excess=1, steps=[Step(overlap=2, word='micro')]),\n",
" Bridge(excess=1, steps=[Step(overlap=1, word='idles')]),\n",
" Bridge(excess=0, steps=[Step(overlap=3, word='lingo')]),\n",
" Bridge(excess=0, steps=[Step(overlap=3, word='lulu')]),\n",
" Bridge(excess=1, steps=[Step(overlap=2, word='kill')]),\n",
" Bridge(excess=0, steps=[Step(overlap=2, word='joker')]),\n",
" Bridge(excess=0, steps=[Step(overlap=3, word='swop')]),\n",
" Bridge(excess=1, steps=[Step(overlap=2, word='yang')]),\n",
" Bridge(excess=1, steps=[Step(overlap=3, word='dater')]),\n",
" Bridge(excess=0, steps=[Step(overlap=4, word='notre')]),\n",
" Bridge(excess=0, steps=[Step(overlap=3, word='biffy')]),\n",
" Bridge(excess=1, steps=[Step(overlap=3, word='eland')]),\n",
" Bridge(excess=0, steps=[Step(overlap=4, word='aglee')]),\n",
" Bridge(excess=0, steps=[Step(overlap=4, word='forma')]),\n",
" Bridge(excess=0, steps=[Step(overlap=4, word='lapel')]),\n",
" Bridge(excess=0, steps=[Step(overlap=4, word='kafir')])]"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"B[::2000] # Sample every 2000th bridge"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question: How many excess letters do the bridges have?** "
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Counter({0: 37189, 1: 16708, 2: 2425, 3: 95, 5: 21, 4: 32, 6: 6, 8: 1})"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Counter of bridge excess letters\n",
"BC = Counter(b.excess for b in B)\n",
"BC"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.3916638631655364"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from statistics import mean\n",
"\n",
"mean(BC.elements())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question: How many 1-step and 2-step bridges are there?**"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Counter({1: 56327, 2: 150})"
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Counter(len(b.steps) for b in B)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question: What strange letter combinations are there?** Let's look at two-letter suffixes or prefixes that only appear in one or two nonsubwords. "
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'tc': {'tchaikovsky'},\n",
" 'hd': {'hdqrs'},\n",
" 'qa': {'qaids', 'qatar'},\n",
" 'uf': {'ufos'},\n",
" 'qo': {'qophs'},\n",
" 'ik': {'ikebanas', 'ikons'},\n",
" 'if': {'iffiness'},\n",
" 'ez': {'ezekiel'},\n",
" 'ip': {'ipecacs'},\n",
" 'mc': {'mcdonald'},\n",
" 'bw': {'bwanas'},\n",
" 'fb': {'fbi'},\n",
" 'gw': {'gweducks', 'gweducs'},\n",
" 'sf': {'sforzatos'},\n",
" 'ek': {'ekistics'},\n",
" 'jn': {'jnanas'},\n",
" 'xm': {'xmases'},\n",
" 'ay': {'ayahs', 'ayatollahs'},\n",
" 'kw': {'kwachas', 'kwashiorkor'},\n",
" 'ym': {'ymca'},\n",
" 'yc': {'ycleped', 'yclept'},\n",
" 'll': {'llamas', 'llanos'},\n",
" 'aj': {'ajar'},\n",
" 'zl': {'zlotys'},\n",
" 'iv': {'ivories', 'ivory'},\n",
" 'ie': {'ieee'},\n",
" 'dv': {'dvorak'},\n",
" 'xi': {'xiphoids', 'xiphosuran'},\n",
" 'wu': {'wurzel'},\n",
" 'ee': {'eelgrasses', 'eelworm'},\n",
" 'zw': {'zwiebacks'},\n",
" 'gj': {'gjetosts'},\n",
" 'ct': {'ctrl'},\n",
" 'pf': {'pfennigs'},\n",
" 'dn': {'dnieper'},\n",
" 'oj': {'ojibwas'},\n",
" 'fj': {'fjords'}}"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"{pre: W.startswith[pre] # Rare two-letter prefixes\n",
" for pre in W.startswith if len(pre) == 2 and len(W.startswith[pre]) <= 2}"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'ui': {'maqui', 'prosequi'},\n",
" 'ua': {'joshua'},\n",
" 'nx': {'bronx', 'meninx'},\n",
" 'hm': {'microhm'},\n",
" 'lu': {'honolulu'},\n",
" 'ec': {'filespec', 'quebec'},\n",
" 'ud': {'aloud', 'overproud'},\n",
" 'yx': {'styx'},\n",
" 'tl': {'peyotl', 'shtetl'},\n",
" 'xe': {'deluxe', 'maxixe'},\n",
" 'ep': {'asleep', 'shlep'},\n",
" 'td': {'retd'},\n",
" 'oi': {'hanoi', 'polloi'},\n",
" 'zt': {'liszt'},\n",
" 'gm': {'apophthegm'},\n",
" 'eh': {'mikveh', 'yahweh'},\n",
" 'nc': {'dezinc', 'quidnunc'},\n",
" 'mt': {'daydreamt', 'undreamt'},\n",
" 'ao': {'chiao', 'ciao'},\n",
" 'wa': {'kiowa', 'okinawa'},\n",
" 'su': {'shiatsu'},\n",
" 'zo': {'diazo', 'palazzo'},\n",
" 'xo': {'convexo'},\n",
" 'mb': {'clomb', 'whitecomb'},\n",
" 'ob': {'blowjob'},\n",
" 'pa': {'tampa'},\n",
" 'ku': {'haiku'},\n",
" 'vo': {'concavo'},\n",
" 'fa': {'khalifa'},\n",
" 'zm': {'transcendentalizm'},\n",
" 'oe': {'monroe'},\n",
" 'bm': {'ibm', 'icbm'},\n",
" 'dt': {'rembrandt'},\n",
" 'uc': {'caoutchouc'},\n",
" 'ko': {'gingko', 'stinko'},\n",
" 'ab': {'skylab'},\n",
" 'sr': {'ussr'},\n",
" 'ou': {'thankyou'},\n",
" 'za': {'organza'},\n",
" 'lm': {'stockholm', 'unhelm'},\n",
" 'dn': {'haydn'},\n",
" 'hn': {'mendelssohn'},\n",
" 'ho': {'groucho'},\n",
" 'hu': {'buchu'},\n",
" 'mp': {'prestamp'},\n",
" 'ug': {'bedrug', 'sparkplug'},\n",
" 'xs': {'duplexs'},\n",
" 'sz': {'grosz'},\n",
" 'we': {'zimbabwe'},\n",
" 'tu': {'impromptu'},\n",
" 'aa': {'markkaa'},\n",
" 'rf': {'waldorf', 'windsurf'},\n",
" 'ji': {'fiji'},\n",
" 'ai': {'bonsai'},\n",
" 'po': {'troppo'},\n",
" 'ef': {'unicef'},\n",
" 'gn': {'champaign'},\n",
" 'ub': {'beelzebub'},\n",
" 'vt': {'govt'},\n",
" 'ru': {'nehru'},\n",
" 'rb': {'cowherb'},\n",
" 'nu': {'vishnu'},\n",
" 'nz': {'franz'},\n",
" 'oz': {'kolkhoz'},\n",
" 'hr': {'kieselguhr'},\n",
" 'ln': {'lincoln'},\n",
" 'cd': {'recd'}}"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"endswith = multimap((w[-2:], w) for w in W.unused_words)\n",
"\n",
"{suf: endswith[suf] # Rare two-letter suffixes\n",
" for suf in endswith if len(endswith[suf]) <= 2}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The two-letter prefixes definitely include some strange words.\n",
"\n",
"The list of two-letter suffixes is mostly picking out proper names and pointing out flaws in the word list. For example, lots of words end in `ab`: blab, cab, dab, gab, jab, lab, etc. But must of them are subwords of plural forms; only `skylab` made it into the word list in singular form but not plural."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Comparison to Tom Murphy's Program\n",
"\n",
"To compare my [program](portman.py) to [Tom Murphy's](https://sourceforge.net/p/tom7misc/svn/HEAD/tree/trunk/portmantout/): \n",
"- I used a greedy approach that builds up a single long portmanteau, one step at a time. \n",
"- Murphy first built a pool of smaller portmanteaux, then greedily joined them all together. \n",
"\n",
"I'm reminded of the [Traveling Salesperson Problem](TSP.ipynb) where one algorithm is to form a single path, always progressing to the nearest neighbor, and another algorithm is to maintain a pool of shorter segments and repeatedly join together the two closest segments. The two approaches are different, but they are both suboptimal greedy methods, andit is not clear whether one is better than the other. You could try it!\n",
"\n",
"(*English trivia:* my program builds a single path of words, and when the path gets stuck and I need something to allow me to continue, it makes sense to call that thing a **bridge**. Murphy's program starts by building a large pool of small portmanteaux that he calls **particles**, and when he can build no more particles, his next step is to put two particles together, so he calls it a **join**. The different metaphors for what our programs are doing lead to different terminology for the same idea.)\n",
"\n",
"In terms of implementation:\n",
"- I used Python (139 lines for the program without the exploratory questions).\n",
"- Murphy used C++ (1867 lines), with a lot of extra functionality I didn't do: generating diagrams and animations, and running multiple threads in parallel. \n",
"\n",
"It appears Murphy perhaps didn't quite have the complete concept of **subwords**. He did mention that when he adds `'bulleting'`, he crosses `'bullet'` and `'bulletin'` off the list, but somehow [his string](http://tom7.org/portmantout/murphy2015portmantout.pdf) contains both `'spectacular'` and `'spectaculars'`. My guess is that when he adds `'spectaculars'` he crosses off `'spectacular'`, but if he happens to add `'spectacular'` first, he will later add `'spectaculars'`. Support for this view is that his output in `bench.txt` says \"I skipped 24319 words that were already substrs\", but I computed that there are 44,320 such subwords; he found about half of them. I think those missing 20,001 words are the main reason why my strings are coming in at around 554,000 letters; less than Murphy's 611,820 letters.\n",
"\n",
"Also, Murphy's joins are always between one-letter prefixes and suffixes. I do the same thing for two-word bridges, because having a `W.bridges[A][B]` for every letter `A` and `B` is the easiest way to prove that the program will terminate. But for one-word bridges, I allow prefixes and suffixes of any length up to a total of 6 for `len(pre) + len(suf)`. I can get away with this because I limited my candidate pool to the 10,000 `W.shortwords`. It would have been time-consuming to build all bridges for all 100,000 words, and probably would not have helped shorten *S* appreciably.\n",
"\n",
"I should say that I stole one important trick from Murphy. After I finished the first version of my program, I looked at his highly-entertaining [video](https://www.youtube.com/watch?time_continue=1&v=QVn2PZGZxaI) and [paper](http://tom7.org/portmantout/murphy2015portmantout.pdf) and I noticed that I had a problem in my use of bridges. My `natalie` function originally contained something like this: \n",
"\n",
" ... unused_step(...) or one_word_bridge(...) or two_word_bridge(...)\n",
" \n",
"That is, I only considered two-word bridges when there was no one-word bridge, on the assumption that one word is shorter than two. But Murphy showed that my assumption was wrong: for `bridges['w']['c']` I had `'workaholic'`, the best one-word bridge, but he had the two-word bridge `'war' + 'arc' = 'warc'`, which saves six excess letters over my single word. After seeing that, I shamelessly copied his approach, and now I too get a four-letter `bridges['w']['c']` (sometimes `'war' + 'arc'` and sometimes `'wet' + 'etc'` or `'we' + 'etc'`)."
]
},
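{
"cell_type": "markdown",
"metadata": {},
"source": [
"Checking the savings with the excess arithmetic from `build_bridges`:\n",
"\n",
"    one_word = len('workaholic') - 2                    # 8 excess letters\n",
"    two_word = len('war') + len('arc') - len('ar') - 2  # 2 excess letters\n",
"    assert one_word - two_word == 6                     # the six letters saved"
]
},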
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Bridge(excess=2, steps=[Step(overlap=1, word='war'), Step(overlap=2, word='arc')])"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"W.bridges['w']['c']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Conclusion\n",
"\n",
"I'll stop here, but you should feel free to do more experimentation of your own. \n",
"\n",
"Here are some things you could do to make the portmantouts more interesting:\n",
"\n",
"- Use linguistic resources (such as [pretrained word embeddings](https://nlp.stanford.edu/projects/glove/)) to teach your program what words are related to each other. Encourage the program to place related words next to each other. Maybe even make grammatical sentences.\n",
"- Use linguistic resources (such as [NLTK](https://github.com/nltk/)) to teach your program where syllable breaks are in words, and what each syllable sounds like. Encourage the program to make overlaps match syllables. (That's why \"preferendumdums\" sounds better than \"fortyphonshore\".)\n",
"\n",
"Here are some things you could do to make *S* shorter:\n",
"\n",
"- **Lookahead**: Unused words are chosen based on the degree of overlap, but nothing else. It might help to prefer unused words which have a suffix that matches the prefix of another unused word. A single-word lookahead or a beam search could be used.\n",
"- **Reserving words**: It seems like `haydn` and `dnieper` are made to go together in that order; they're the only two words with `dn` as an affix. Similarly, `womenfolk` and `menfolks` should go together in that order, for a 7-letter overlap. But if we happened to place `dnieper` or `menfolks` first, we would loose the chance of these nice overlaps. Maybe there could be a system that assures the proper ordering, or a preprocessing step that joins together words that go together uniquely well. \n",
"- **Word choice ordering**: Perhaps `startswith_table` could sort the words in each key's bucket so that the \"difficult\" words (say, the ones that end in unusual letters) are encountered earlier in the program's execution, when there are more available words for them to connect to.\n",
"- **Learning**: The greedy approach minimizes the number of excess letters for each step. But some words are harder to place than others. Instead of just minimizing the excess, consider also the *expected* excess of each word, which could be learned by averaging over several random runs of `natalie`. \n",
" \n",
"Here are some things you could do to make the program more robust:\n",
"\n",
"- Write and run unit tests.\n",
"- Find other word lists, perhaps in other languages, and try the program on them.\n",
"- Consider what to do for a wordset that has missing bridges. You could try three-word bridges, you could allow the program to back up and remove a previously-placed word; you could allow the addition of words to the start as well as the end of `P`."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}