Add files via upload
@@ -13,7 +13,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 20,
+"execution_count": 6,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -42,7 +42,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 35,
+"execution_count": 7,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -67,7 +67,8 @@
 " vam=round(ft / hr / 3.28084),\n",
 " fpm=round(ft / mi),\n",
 " pct=round(ft / mi * 100 / 5280, 2),\n",
-" kms=round(mi * 1.609, 2))\n",
+" kms=round(mi * 1.609, 2),\n",
+" km_up=round(ft * 0.0003048, 1))\n",
 "\n",
 "def drop_index(frame) -> pd.DataFrame:\n",
 " \"\"\"Drop the index column.\"\"\"\n",
@@ -77,30 +78,35 @@
 },
 {
 "cell_type": "code",
-"execution_count": 39,
+"execution_count": 65,
 "metadata": {},
 "outputs": [],
 "source": [
 "rides = parse_rides(open('bikerides.tsv'))\n",
-"yearly = parse_rides(open('bikeyears.tsv')).drop(columns=['date', 'title'])"
+"yearly = parse_rides(open('bikeyears.tsv')).drop(columns=['date', 'title'])\n",
+"daily = yearly.copy()\n",
+"for name in 'hours miles feet kms km_up'.split():\n",
+" daily[name] = round(daily[name].map(lambda x: x / 350), 3 if name == 'km_up' else 1)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Reading Data: `segments`\n",
+"# Reading Data: `segments`, `places`, and `tiles`\n",
 "\n",
 "I picked some representative climbing segments ([`bikesegments.csv`](bikesegments.csv)) with the segment length in miles and climb in feet, along with several of my times on the segment. A line like\n",
 "\n",
 " Old La Honda, 2.98, 1255, 28:49, 34:03, 36:44\n",
 " \n",
-"means that this segment of Old La Honda Rd is 2.98 miles long, 1255 feet of climbing, and I've selected three times for my rides on that segment: the fastest, middle, and slowest of the times that Strava shows. (However, I ended up dropping the slowest time in the charts to make them less busy.)"
+"means that this segment of Old La Honda Rd is 2.98 miles long, 1255 feet of climbing, and I've selected three times for my rides on that segment: the fastest, middle, and slowest of the times that Strava shows. (However, I ended up dropping the slowest time in the charts to make them less busy.)\n",
+"\n",
+"I keep track of percentage of roads ridden in various places in `'bikeplaceshort.csv'`, which comes from wandrer.earth."
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 23,
+"execution_count": 41,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -117,94 +123,55 @@
 },
 {
 "cell_type": "code",
-"execution_count": 24,
+"execution_count": 42,
 "metadata": {},
 "outputs": [],
 "source": [
 "segments = parse_segments(open('bikesegments.csv'))"
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"# Reading Data: `places` and `tiles`\n",
-"\n",
-"Monthly, I will take my [summary data from wandrer.earth](https://wandrer.earth/athletes/3534/santa-clara-county-california) and enter it in the file [bikeplaces.csv](bikeplaces.csv), in a format where\n",
-"\n",
-" San Carlos,99.0,SMC,22.2,26.0,32.9,,37.2,39.0,40.5,,41.4,,,41.7,,,,,,59.5,78.7\n",
-"means that San Carlos has 99.0 miles of roads, is in San Mateo County (SMC), and in the first month that I had ridden 22.2% of the roads in the first month that I kept track, and 78.7% in the most recent month. In months with no entry, there was no change."
-]
-},
 {
 "cell_type": "code",
-"execution_count": 25,
+"execution_count": 43,
 "metadata": {},
 "outputs": [],
 "source": [
-"places = pd.read_csv('bikeplaces.csv', comment='#')\n",
-"months = [m for m in places.columns if '/' in m]\n",
-"places['maxpct'] = [max(p for p in place[4:] if not pd.isna(p))\n",
-" for place in places.itertuples()]"
+"places = pd.read_table(open('bikeplaceshort.csv'), sep=',', comment='#')"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 41,
+"execution_count": 50,
 "metadata": {},
-"outputs": [
+"outputs": [],
-{
-"data": {
-"text/html": [
-"<div>\n",
-"<style scoped>\n",
-" .dataframe tbody tr th:only-of-type {\n",
-" vertical-align: middle;\n",
-" }\n",
-"\n",
-" .dataframe tbody tr th {\n",
-" vertical-align: top;\n",
-" }\n",
-"\n",
-" .dataframe thead th {\n",
-" text-align: right;\n",
-" }\n",
-"</style>\n",
-"<table border=\"1\" class=\"dataframe\">\n",
-" <thead>\n",
-" <tr style=\"text-align: right;\">\n",
-" <th></th>\n",
-" <th>date</th>\n",
-" <th>tiles</th>\n",
-" <th>square</th>\n",
-" <th>cluster</th>\n",
-" </tr>\n",
-" </thead>\n",
-" <tbody>\n",
-" <tr>\n",
-" <th></th>\n",
-" <td>Sep 2022</td>\n",
-" <td>2481</td>\n",
-" <td>11x11</td>\n",
-" <td>295</td>\n",
-" </tr>\n",
-" </tbody>\n",
-"</table>\n",
-"</div>"
-],
-"text/plain": [
-" date tiles square cluster\n",
-" Sep 2022 2481 11x11 295"
-]
-},
-"execution_count": 41,
-"metadata": {},
-"output_type": "execute_result"
-}
-],
 "source": [
-"tiles = drop_index(pd.DataFrame(columns='date tiles square cluster'.split(),\n",
-" data=[('Sep 2022', 2481, '11x11', 295)]))"
+"def make_clickable(comment) -> str:\n",
+" \"\"\"Make a clickable link for a pandas dataframe.\"\"\"\n",
+" if '!' not in comment:\n",
+" return comment\n",
+" anchor, number = comment.split('!')\n",
+" return f'<a href=\"https://www.strava.com/activities/{number}\" rel=\"noopener noreferrer\" target=\"_blank\">{anchor}</a>'\n",
+"\n",
+"def link_date(date) -> str:\n",
+" \"\"\"Make the date into a clickable link.\"\"\"\n",
+" m, d, y = date.split('/')\n",
+" return f'<a href=\"https://www.statshunters.com/?to={y}{m}{d}\" rel=\"noopener noreferrer\" target=\"_blank\">{date}</a>'"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 57,
+"metadata": {},
+"outputs": [],
+"source": [
+"tiles = drop_index(pd.DataFrame(columns='date square cluster total comment'.split(), data=[\n",
+" ('06/30/2023', 13, 689, 2640, 'Rides in east Bay!9298603815'),\n",
+" ('04/14/2023', 13, 630, 2595, 'Black Sands Beach connects Marin to max cluster!8891171008'),\n",
+" ('03/04/2023', 13, 583, 2574, 'Almaden rides connects Gilroy to max cluster!8654437264'),\n",
+" ('10/22/2022', 13, 396, 2495, 'Alviso levees to get to 13x13 max square!8003921626'),\n",
+" ('10/16/2022', 12, 393, 2492, 'Milpitas ride connects East Bay to max cluster!7974994605'),\n",
+" ('09/08/2022', 11, 300, 2487, 'First started tracking tiles')])\n",
+" ).style.format({'comment': make_clickable, 'date': link_date})"
 ]
 },
 {
@@ -216,7 +183,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 27,
+"execution_count": 14,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -256,69 +223,35 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Plotting Wandrer Places"
+"# Wandrer Places "
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 28,
+"execution_count": 15,
 "metadata": {},
 "outputs": [],
 "source": [
-"special_areas = dict(sf='San Francisco Neighborhoods', sj='San Jose Neighborhoods', \n",
-" far='Far Away Places', county='Bay Area Counties', big='California, USA and Earth')\n",
-"\n",
-"def wandering(places, pcts=(100, 99, 90, 50, 33.3, 25, 0), special_areas=special_areas):\n",
-" \"Plot charts within the various percent ranges, and special groups.\"\n",
-" for i in range(len(pcts) - 1):\n",
-" hi, lo = pcts[i], pcts[i + 1]\n",
-" inrange = places[(places.maxpct > lo) & (places.maxpct <= hi) & ~places.area.isin(special_areas)]\n",
-" wandrer_plot(f'Places with {lo}% to {hi}% roads traveled', inrange)\n",
-" for area in special_areas:\n",
-" wandrer_plot(special_areas[area], places[places.area == area])\n",
+"def wandering(places=places, by=['pct']):\n",
+" \"All those who wander are not lost.\" # Also try by=['cat', 'pct']\n",
+" frame = places.sort_values(by=by, ascending=('pct' not in by))\n",
+" M = 1_000_000\n",
+" for i, (name, miles, county, pct) in frame.iterrows():\n",
+" # Some fiddling to get the format right\n",
+" p = f'{pct:.1f}' if (pct > 0.1) else f'{pct:.3f}'\n",
+" mymiles = pct / 100 * miles\n",
+" done = f'{rounded(mymiles)}/{rounded(miles)} mi'\n",
+" togo = next((f'{rounded(target / 100 * miles - mymiles):>5} mi for {target}%' \n",
+" for target in (0.02, 0.1, 0.2, 1, 2, 25, 50, 90, 99)\n",
+" if mymiles < target / 100 * miles), '')\n",
+" print(f'{county} {p:>5}% {name:25} {done:>15} {togo}') \n",
 " \n",
-"def wandrer_plot(title, places):\n",
-" \"\"\"Plot Wandrer.earth data for the places with given abbrevs.\"\"\"\n",
-" if len(places) == 0:\n",
-" return # Don't make an empty plot\n",
-" places = places.sort_values(by='maxpct', ascending=False)\n",
-" fig, ax = plt.figure(), plt.subplot(111); \n",
-" plt.plot()\n",
-" for (_, name, miles, area, *pcts, maxpct), marker in zip(places.itertuples(), markers):\n",
-" pcts = replace_nans(pcts)\n",
-" ax.plot(pcts, ':', marker=marker, label=label(pcts, name, miles))\n",
-" ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), shadow=True,\n",
-" prop=matplotlib.font_manager.FontProperties(family='monospace'))\n",
-" plt.xticks(range(len(pcts)), labels=months, rotation=90, fontfamily='monospace')\n",
-" plt.ylabel('Percent of Roads Ridden')\n",
-" plt.title(title); plt.tight_layout(); grid(axis='y'); plt.show()\n",
-" \n",
-"markers = '^v><osdhxDHPX*' * 3 # Matplotlib markers\n",
-"bonuses = (0.02, 0.1, 2, 25, 50, 90, 99) # Percents that earn important bonuses\n",
-"\n",
-"def label(pcts, name, miles) -> str:\n",
-" \"\"\"Make a label for the legend.\"\"\"\n",
-" pct = f'{rounded(pcts[-1]):>3}' if pcts[-1] > 1.4 else f'{pcts[-1]}'\n",
-" done = miles * pcts[-1]\n",
-" bonus = next((f' {rounded((p - pcts[-1]) / 100 * miles):>3} to {p}%' \n",
-" for p in bonuses if p >= pcts[-1]), '')\n",
-" return f'{pct}% ({rounded(done / 100):>3}/{rounded(miles):<3} mi){bonus} {name}'\n",
-" \n",
-"def replace_nans(numbers) -> list:\n",
-" \"\"\"Replace NaN (not a number) values with the previous actual number.\"\"\"\n",
-" result = []\n",
-" prev = 0\n",
-" for x in numbers:\n",
-" if x == x:\n",
-" prev = x\n",
-" else: # Not a Number \n",
-" x = prev\n",
-" result.append(x)\n",
-" return result \n",
-"\n",
 "def rounded(x: float) -> str: \n",
 " \"\"\"Round x to 3 spaces wide (if possible).\"\"\"\n",
-" return f'{round(x):,d}' if x > 10 else f'{x:.1f}'"
+" return (rounded(x/1e6) + 'M' if x > 1e6\n",
+" else f'{x/1e6:4.2f}M' if x > 1e5\n",
+" else f'{round(x):,d}' if x > 10 \n",
+" else f'{x:.1f}')"
 ]
 },
 {
@@ -330,24 +263,24 @@
 },
 {
 "cell_type": "code",
-"execution_count": 34,
+"execution_count": 16,
 "metadata": {},
 "outputs": [],
 "source": [
 "def make_leaders(data):\n",
 " \"\"\"Make a dataframe of leaders in two counties.\"\"\"\n",
 " leaders = pd.DataFrame(data, columns=['Name', 'Initials', 'SMC %', 'SCC %', 'Front?'])\n",
-" leaders['SMC miles'] = [round(29.51 * d[2]) for d in data]\n",
-" leaders['SCC miles'] = [round(75.64 * d[3]) for d in data]\n",
+" leaders['SMC miles'] = [round(2814 * d[2] / 100) for d in data]\n",
+" leaders['SCC miles'] = [round(7569 * d[3] / 100) for d in data]\n",
 " leaders['Total miles'] = leaders['SMC miles'] + leaders['SCC miles']\n",
-" leaders['Total %'] = leaders['SMC %'] + leaders['SCC %']\n",
-" return drop_index(leaders.sort_values('Total %', ascending=False))\n",
+" leaders['Avg %'] = (leaders['SMC %'] + leaders['SCC %']) / 2\n",
+" return drop_index(leaders.sort_values('Avg %', ascending=False))\n",
 "\n",
-"leaders = make_leaders([ # Data as of Sept 8, 2022\n",
-" ('Barry Mann', 'BM', 75.34, 29.32, 1), ('Jason Molenda', 'JM', 7.13, 54.59, 1), \n",
-" ('Peter Norvig', 'PN', 55.26, 30.31, 1), ('Brian Feinberg', 'BF', 29.72, 35.93, 1),\n",
-" ('Jim Brooks', 'JB', 4.23, 43.53, 0), ('Megan Gardner', 'MG', 92.51, 8.69, 1),\n",
-" ('Matthew Ring', 'MR', 75.53, 1.48, 0), ('Elliot Huff', 'EF', 51.78, 8.14, 0)])\n",
+"leaders = make_leaders([ # Data as of Sept 20, 2023 (Name, Initials, SMC, SCC, Frontier?)\n",
+" ('Barry Mann', 'BM', 76.97, 30.21, 1), ('Jason Molenda', 'JM', 7.13, 55.39, 1), \n",
+" ('Peter Norvig', 'PN', 61.56, 32.8, 1), ('Brian Feinberg', 'BF', 32.5, 43.68, 1),\n",
+" ('Jim Brooks', 'JB', 4.23, 44.36, 0), ('Megan Gardner', 'MG', 97.62, 8.69, 1),\n",
+" ('Matthew Ring', 'MR', 78.85, 1.48, 0), ('Elliot Hoff', 'EF', 52.88, 8.14, 0)])\n",
 " \n",
 "def pareto_front(leaders):\n",
 " ax = leaders.plot('SMC %', 'SCC %', grid=True, kind='scatter')\n",
@@ -367,7 +300,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 30,
+"execution_count": 17,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -381,52 +314,22 @@
 " \"\"\"The number of rides needed to reach an Eddington number target.\"\"\"\n",
 " return target - sum(distances >= target)\n",
 "\n",
-"def Ed_gaps(rides, N=10) -> dict:\n",
-" \"\"\"A table of gaps to Eddington numbers by year, and a plot..\"\"\"\n",
-" E_km = Ed_number(rides, 'kms')\n",
-" E_mi = Ed_number(rides, 'miles')\n",
-" data = [(E_km + d, Ed_gap(rides.kms, E_km + d), \n",
-" E_mi + d, Ed_gap(rides.miles, E_mi + d))\n",
+"def Ed_gaps(rides, E_km=100, E_mi=67, N=11) -> dict:\n",
+" \"\"\"A table of gaps to Eddington numbers by year.\"\"\"\n",
+" data = [(E_km + d, sum(rides.kms >= E_km + d), Ed_gap(rides.kms, E_km + d), \n",
+" E_mi + d, sum(rides.miles >= E_mi + d), Ed_gap(rides.miles, E_mi + d))\n",
 " for d in range(N)]\n",
-" df = pd.DataFrame(data, columns=['kms', 'kms gap', 'miles', 'miles gap'])\n",
+" df = pd.DataFrame(data, columns=['kms', 'km rides', 'kms gap', \n",
+" 'miles', 'miles rides', 'miles gap'])\n",
 " return drop_index(df)\n",
 "\n",
-"def Ed_progress(rides, years=reversed(range(2013, 2022 + 1))) -> pd.DataFrame:\n",
+"def Ed_progress(rides, years=range(2023, 2013, -1)) -> pd.DataFrame:\n",
 " \"\"\"A table of Eddington numbers by year, and a plot.\"\"\"\n",
 " def Ed(year, unit): return Ed_number(rides[rides['year'] <= year], unit)\n",
 " data = [(y, Ed(y, 'kms'), Ed(y, 'miles')) for y in years]\n",
 " df = pd.DataFrame(data, columns=['year', 'Ed_km', 'Ed_mi'])\n",
 " return drop_index(df)"
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"# Climbing to Space"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 31,
-"metadata": {},
-"outputs": [],
-"source": [
-"per_month_climbing = [35.491, 31.765, 39.186, 33.641, 32.782, 14.809, 46.731, 38.556]\n",
-"\n",
-"space = {'100 kms': 328.204, '10 Everests': 290.320, '50 miles': 50 * 5.280}\n",
-"\n",
-"def climbing(per_month=per_month_climbing, space=space):\n",
-" \"\"\"Plot progress in climbing\"\"\"\n",
-" total = np.cumsum(per_month)\n",
-" for label in space:\n",
-" plt.plot(range(12), [space[label]] * 12, ':', label=label)\n",
-" plt.plot(range(len(total)), total, 'o-', label='my total')\n",
-" plt.plot(range(len(total)), per_month, 's-.', label='per month')\n",
-" plt.legend(loc=(1.04, .64), fontsize='large'); grid()\n",
-" plt.xlabel('Month of 2022'); plt.ylabel('Total climbing (Kft)')\n",
-" plt.xticks(range(12), 'Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec'.split())"
-]
-}
+}
 ],
 "metadata": {
@@ -446,7 +349,8 @@
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.9.12"
-}
+},
+"toc-autonumbering": true
 },
 "nbformat": 4,
 "nbformat_minor": 4
ipynb/OneLetterOff.ipynb (new file, 602 lines)
@@ -0,0 +1,602 @@
|
|||||||
|
{
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "a6233239-1cf1-4eee-affc-d96a0747ee68",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"<div style=\"text-align: right\" align=\"right\"><i>Peter Norvig, Aug 2023</i></div>\n",
|
||||||
|
"\n",
|
||||||
|
"# One Letter Off: Word Game with Large Language Models\n",
|
||||||
|
"\n",
|
||||||
|
"Professor [**Serge Belongie**](https://en.wikipedia.org/wiki/Serge_Belongie) invented a word game: \n",
|
||||||
|
"- *Pick a word and drop the last letter to form a second word.*\n",
|
||||||
|
"- *Come up with a crossword-puzzle-style clue to help someone guess the resulting two-word phrase.*\n",
|
||||||
|
"\n",
|
||||||
|
"I'm calling the game **One Letter Off**, because that's one of the names that [Bard](https://bard.google.com) suggested when I asked. I'll also introduce a variant of the game where *any* letter in the word can be dropped, not just the last one.\n",
|
||||||
|
"\n",
|
||||||
|
"As an example, pick *board*, drop the last letter to get the phrase *boar board* and write the clue *pig plank.* In the variant, you could drop the letter *o* to get *board bard* and write the clue *pine poet.* (Note the pair of words can be in either order.)\n",
|
||||||
|
"\n",
|
||||||
|
"I thought it would be interesting to write a program to generate interesting word pairs and create clues. My plan is:\n",
|
||||||
|
"\n",
|
||||||
|
"1. Obtain a dictionary of words.\n",
|
||||||
|
"2. Write code to generate all one-letter-off word pairs and sort them so the best ones come first.\n",
|
||||||
|
"3. Inspect some of the pairs and manually write some clues for them, just to get a feel for the task.\n",
|
||||||
|
"4. Write code to prompt a large language model and see what clues it can produce.\n",
|
||||||
|
"\n",
|
||||||
|
"# 1. A dictionary of words, with word vectors\n",
|
||||||
|
"\n",
|
||||||
|
"Given a dictionary, it is easy to find all pairs of words that are one letter off. The tricky part is to decide which are the \"good\" ones. Clearly, pairing the word *race* with *races* or *raced* or *racer* is bad, because they are all just forms of the same root word. The game is interesting when two words are very different in meaning, even though they are only one letter apart. Pairing *quaker* with *quake* is ok, because a Quaker is not just someone who quakes. So I can't rely on simple heuristic rules like \"don't drop a final -r.\" \n",
|
||||||
|
"\n",
|
||||||
|
"I can, however, associate each word with a [**word vector**](https://en.wikipedia.org/wiki/Word_embedding) (also called *word embedding*) and define the goodness of a pair as the distance between the pairs. Multiple research teams have published open-source dictionaries that map words to word vectors. Two words that are similar in meaning or usage have similar word vectors (and so the distance between them is small), but words with distinct usage should have dissimilar word vectors (and a larger distance between them). \n",
|
||||||
|
"\n",
|
||||||
|
"I downloaded a [file of word vectors](http://vectors.nlpl.eu/repository/20/0.zip) from the [NLPL Word Vectors Repository](http://vectors.nlpl.eu/repository/) and truncated it to consider only the [30,000 most common words](model30k.txt), because I didn't want to be giving clues for words that the guesser is unlikely to know. Each line in the word vector file has a word, followed by its part of speech (which we will ignore), followed by a vector of 300 numbers. For example:\n",
|
||||||
|
"\n",
|
||||||
|
" say_VERB -0.008861 0.097097 0.100236 0.070044 -0.079279 0.000923 ...\n",
|
||||||
|
" \n",
|
||||||
|
"I'll read the file into a dict of `{word: vector}` called `vectors`, but I will eliminate words that are too short, or have a non-alphabetic or uppercase character. Also, if a word appears twice with two different parts of speech, I'll use the first one (because the file is sorted with more frequent words first)."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 1,
|
||||||
|
"id": "bfedfb19",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"from typing import *\n",
|
||||||
|
"import numpy as np\n",
|
||||||
|
"\n",
|
||||||
|
"def read_vectors(lines: Iterable[str]) -> Dict[str, np.array]:\n",
|
||||||
|
" \"\"\"Read lines of text into a dict of {word: vector} pairs.\"\"\"\n",
|
||||||
|
" vectors = {}\n",
|
||||||
|
" for line in lines:\n",
|
||||||
|
" entry, *numbers = line.split()\n",
|
||||||
|
" word, POS = entry.split('_') # Ignore the part of speech\n",
|
||||||
|
" if word not in vectors and len(word) >= 4 and word.isalpha() and word.islower():\n",
|
||||||
|
" vectors[word] = np.array([float(x) for x in numbers])\n",
|
||||||
|
" return vectors\n",
|
||||||
|
"\n",
|
||||||
|
"vectors = read_vectors(open('model30k.txt'))"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 2,
|
||||||
|
"id": "efdc80fe",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"text/plain": [
|
||||||
|
"18331"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"execution_count": 2,
|
||||||
|
"metadata": {},
|
||||||
|
"output_type": "execute_result"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"source": [
|
||||||
|
"len(vectors)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "8ad74e30",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"There are 18,331 distinct entries, out of the 30,000 lines in the original file.\n",
|
||||||
|
"\n",
|
||||||
|
"The [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) between two word vectors is the [norm](https://en.wikipedia.org/wiki/Norm_(mathematics)) of their difference:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 3,
|
||||||
|
"id": "d02a7b33",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"def distance(word1, word2, vectors) -> float:\n",
|
||||||
|
" \"\"\"Distance between vectors for the two words.\"\"\"\n",
|
||||||
|
" return np.linalg.norm(vectors[word1] - vectors[word2])"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 4,
|
||||||
|
"id": "02fa9df6",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"text/plain": [
|
||||||
|
"1.3230686663760125"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"execution_count": 4,
|
||||||
|
"metadata": {},
|
||||||
|
"output_type": "execute_result"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"source": [
|
||||||
|
"distance('quaker', 'quake', vectors)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 5,
|
||||||
|
"id": "d40b7470",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"text/plain": [
|
||||||
|
"0.7579947920361987"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"execution_count": 5,
|
||||||
|
"metadata": {},
|
||||||
|
"output_type": "execute_result"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"source": [
|
||||||
|
"distance('smoker', 'smoke', vectors)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "f2c125d7",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"# 2. Word pairs, best first\n",
|
||||||
|
"\n",
|
||||||
|
"The next step is to form word pairs by considering each word, looking at all ways to drop a letter from each word, and checking if those are words. Then we'll sort them so the \"best\" pairs come first."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 6,
|
||||||
|
"id": "e2061a00",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"def sorted_pairs(vectors, drop_fn: callable) -> List[Tuple[str, str]]:\n",
|
||||||
|
" \"\"\"List of (word1, word2) pairs, biggest word-vector distance first.\"\"\"\n",
|
||||||
|
" pairs = [(w1, w2) for w1 in vectors for w2 in drop_fn(w1, vectors)]\n",
|
||||||
|
" pairs.sort(key=lambda pair: distance(*pair, vectors), reverse=True)\n",
|
||||||
|
" return pairs"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "6c65db76",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Here are the two functions to drop letters (either just the last letter, or any letter) and the function `words`, which checks which of the resulting strings are in fact words."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 7,
|
||||||
|
"id": "227cbcc7",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"def drop_last_letter(word, vectors) -> Set[str]: \n",
|
||||||
|
" \"\"\"All ways to drop the last letter from word to form a word in `vectors`.\"\"\"\n",
|
||||||
|
" return words({word[:-1]}, vectors)\n",
|
||||||
|
"\n",
|
||||||
|
"def drop_any_letter(word, vectors) -> Set[str]: \n",
|
||||||
|
" \"\"\"All ways to drop one letter from word to form a word in `vectors`.\"\"\"\n",
|
||||||
|
" return words({word[:i] + word[i + 1:] for i in range(len(word))}, vectors)\n",
|
||||||
|
"\n",
|
||||||
|
"def words(candidates, vectors) -> Set[str]:\n",
|
||||||
|
" \"\"\"All candidate strings that are words in the `vectors` dict.\"\"\"\n",
|
||||||
|
" return {w for w in candidates if w in vectors}"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 8,
|
||||||
|
"id": "6d82b297",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"text/plain": [
|
||||||
|
"{'clam'}"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"execution_count": 8,
|
||||||
|
"metadata": {},
|
||||||
|
"output_type": "execute_result"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"source": [
|
||||||
|
"drop_last_letter('clamp', vectors)"
|
||||||
|
]
|
||||||
|
},
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "1e4c8df5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'camp', 'clam', 'clap', 'lamp'}"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "drop_any_letter('clamp', vectors)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5f9cfe4c",
   "metadata": {},
   "source": [
    "We're ready to generate a sorted list of word pairs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "0b3e1834",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1129"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pairs = sorted_pairs(vectors, drop_last_letter)\n",
    "len(pairs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "21903917",
   "metadata": {},
   "source": [
    "There are 1,129 pairs, but I'll just look at the first 50:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "c55e51a6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('seedy', 'seed'),\n",
       " ('depth', 'dept'),\n",
       " ('hindu', 'hind'),\n",
       " ('sloth', 'slot'),\n",
       " ('plumb', 'plum'),\n",
       " ('tense', 'tens'),\n",
       " ('reverb', 'rever'),\n",
       " ('irish', 'iris'),\n",
       " ('siren', 'sire'),\n",
       " ('trusty', 'trust'),\n",
       " ('meter', 'mete'),\n",
       " ('pleat', 'plea'),\n",
       " ('sinew', 'sine'),\n",
       " ('chancel', 'chance'),\n",
       " ('heath', 'heat'),\n",
       " ('aspiring', 'aspirin'),\n",
       " ('forth', 'fort'),\n",
       " ('combo', 'comb'),\n",
       " ('drama', 'dram'),\n",
       " ('paste', 'past'),\n",
       " ('outwith', 'outwit'),\n",
       " ('filly', 'fill'),\n",
       " ('board', 'boar'),\n",
       " ('livery', 'liver'),\n",
       " ('forcep', 'force'),\n",
       " ('stocky', 'stock'),\n",
       " ('corporal', 'corpora'),\n",
       " ('photon', 'photo'),\n",
       " ('forte', 'fort'),\n",
       " ('rabbit', 'rabbi'),\n",
       " ('median', 'media'),\n",
       " ('cello', 'cell'),\n",
       " ('chart', 'char'),\n",
       " ('spark', 'spar'),\n",
       " ('liver', 'live'),\n",
       " ('caster', 'caste'),\n",
       " ('irony', 'iron'),\n",
       " ('insider', 'inside'),\n",
       " ('heron', 'hero'),\n",
       " ('macho', 'mach'),\n",
       " ('heroine', 'heroin'),\n",
       " ('polyp', 'poly'),\n",
       " ('gravely', 'gravel'),\n",
       " ('primer', 'prime'),\n",
       " ('honey', 'hone'),\n",
       " ('quaker', 'quake'),\n",
       " ('tablet', 'table'),\n",
       " ('grant', 'gran'),\n",
       " ('prime', 'prim'),\n",
       " ('valet', 'vale')]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pairs[:50]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e87f893",
   "metadata": {},
   "source": [
    "Now I'll consider the variant where we can drop any letter:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "09dfc2fa",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('mitre', 'mite'),\n",
       " ('seedy', 'seed'),\n",
       " ('posit', 'post'),\n",
       " ('resign', 'resin'),\n",
       " ('insect', 'inset'),\n",
       " ('score', 'core'),\n",
       " ('parse', 'arse'),\n",
       " ('depth', 'dept'),\n",
       " ('convert', 'covert'),\n",
       " ('thank', 'tank'),\n",
       " ('hindu', 'hind'),\n",
       " ('orally', 'rally'),\n",
       " ('stigma', 'sigma'),\n",
       " ('naive', 'nave'),\n",
       " ('harmful', 'armful'),\n",
       " ('sloth', 'slot'),\n",
       " ('canyon', 'canon'),\n",
       " ('bassist', 'assist'),\n",
       " ('preach', 'peach'),\n",
       " ('launder', 'lander'),\n",
       " ('congenital', 'congenial'),\n",
       " ('supper', 'upper'),\n",
       " ('duress', 'dress'),\n",
       " ('usher', 'user'),\n",
       " ('crate', 'rate'),\n",
       " ('platitude', 'latitude'),\n",
       " ('ironic', 'ionic'),\n",
       " ('sever', 'seer'),\n",
       " ('quilt', 'quit'),\n",
       " ('timer', 'tier'),\n",
       " ('yeast', 'east'),\n",
       " ('sturdy', 'study'),\n",
       " ('tamper', 'taper'),\n",
       " ('crater', 'cater'),\n",
       " ('frame', 'fame'),\n",
       " ('broach', 'roach'),\n",
       " ('stage', 'sage'),\n",
       " ('leaver', 'lever'),\n",
       " ('restate', 'estate'),\n",
       " ('sluice', 'slice'),\n",
       " ('blinding', 'binding'),\n",
       " ('gamble', 'gable'),\n",
       " ('strait', 'trait'),\n",
       " ('pledge', 'ledge'),\n",
       " ('clean', 'clan'),\n",
       " ('shite', 'site'),\n",
       " ('sched', 'shed'),\n",
       " ('holist', 'hoist'),\n",
       " ('plumb', 'plum'),\n",
       " ('waive', 'wave')]"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sorted_pairs(vectors, drop_any_letter)[:50]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44a30715",
   "metadata": {},
   "source": [
    "Both lists provide pretty promising pairs!\n",
    "\n",
    "But did I really accomplish the goal of eliminating bad pairs? I think so! The pairs at the end of the list are exactly the kind of thing I wanted to get rid of:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "56cf7c96",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('northwards', 'northward'),\n",
       " ('decentralised', 'decentralise'),\n",
       " ('insured', 'insure'),\n",
       " ('thanks', 'thank'),\n",
       " ('surpluse', 'surplus'),\n",
       " ('eastwards', 'eastward'),\n",
       " ('devoted', 'devote'),\n",
       " ('opposed', 'oppose'),\n",
       " ('alleged', 'allege'),\n",
       " ('waken', 'wake'),\n",
       " ('diall', 'dial'),\n",
       " ('increased', 'increase'),\n",
       " ('randomised', 'randomise'),\n",
       " ('brewery', 'brewer'),\n",
       " ('involved', 'involve'),\n",
       " ('vaginal', 'vagina'),\n",
       " ('larval', 'larva'),\n",
       " ('fabliaux', 'fabliau'),\n",
       " ('towards', 'toward'),\n",
       " ('mucosal', 'mucosa')]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pairs[-20:]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5fa3c30e",
   "metadata": {},
   "source": [
    "# 3. Manually writing clues\n",
    "\n",
    "I can take some of these suggested pairs, and make up clues on my own:\n",
    "\n",
    "|pair|**clue**|\n",
    "|----|----|\n",
    "|alley ally|**Bowling teammate**|\n",
    "|binding blinding|**Mandatory punishment for the cyclops Polyphemus**|\n",
    "|class lass|**Sophisticated young lady**|\n",
    "|finnish finish|**Scandinavian ending**|\n",
    "|harmful armful|**Dangerous bundle**|\n",
    "|latitude platitude|**Parallel cliche**|\n",
    "|plum plumb|**Most excellent toilet installation job**|"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d502d513",
   "metadata": {},
   "source": [
    "# 4. Prompting an LLM to write clues\n",
    "\n",
    "I wrote some code to submit pairs of words to a large language model (LLM) and get back a clue. The basic approach looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "36a92c46",
   "metadata": {},
   "outputs": [],
   "source": [
    "from requests import post\n",
    "\n",
    "def get_a_clue(pair, url, api_key) -> str:\n",
    "    \"\"\"Ask an LLM to generate a clue for the pair of words.\"\"\"\n",
    "    phrase = ' '.join(pair)\n",
    "    return ask(f'Write a clever crossword puzzle clue for the phrase \"{phrase}\"', url, api_key)\n",
    "\n",
    "def ask(query, url, api_key) -> str:\n",
    "    \"\"\"Ask the PaLM model to respond to the query.\"\"\"\n",
    "    response = post(url, headers={\"Authorization\": f\"Bearer {api_key}\"}, json={\"query\": query})\n",
    "    response.raise_for_status()\n",
    "    return response.json()[\"queryResult\"][\"fulfillmentText\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c4bbb58",
   "metadata": {},
   "source": [
    "However, I'm not going to run the above code in this notebook, for three reasons:\n",
    "1. Every time the code is re-run it produces a different response, and some runs are not very good.\n",
    "2. For security, I can't show the API key in this notebook, so you would need to get your own API key.\n",
    "3. There are many LLMs and I don't want to imply that this is the only one to use.\n",
    "\n",
    "Instead I'll show in the table below some of the previously-generated responses, edited for brevity, and selecting mostly good responses, with a few bad ones to illustrate specific points. The model gets a lot of things right:\n",
    "- Some of the clues are really good!\n",
    "- The model mostly understands how clues work: short, not too hard, not too easy.\n",
    "- The model knows that word play or puns are desirable.\n",
    "- The model gives an explanation of the clue as well as the clue itself.\n",
    "- Twice the model used a question mark, a crossword puzzle convention to signal that the clue is not straightforward.\n",
    "\n",
    "The model also gets some things wrong:\n",
    "- Some of the clues are really bad!\n",
    "- Many clues break a cardinal rule by mentioning one of the answer words (see \"board boar\", etc).\n",
    "- The model often claims that a clue is \"a play on\" two meanings of a word (so far so good), but then gets the words or meanings confused (see \"frame fame\", \"pleat plea\", \"stocky stock\").\n",
    "\n",
    "|pair|drop|clue|explanation|\n",
    "| :---- | :---- | :---- | :----------- |\n",
    "|aspiring aspirin|any|**Wannabe pain reliever**|The clue suggests that an \"aspiring aspirin\" is a something that wants to be a pain reliever, but is not yet one. |\n",
    "|blinding binding|any|**A dazzling cover-up**|The clue suggests that a \"blinding binding\" is a very bright or dazzling cover-up, such as the cover of a book with a shiny metallic finish or a book with a cover made of sequins.|\n",
    "|board boar|last|**A pig that's a member of the board of directors**|This clue is clever because it is a pun on the words \"board\" and \"boar\". It also plays on the stereotype of pigs being greedy and wanting to be in positions of power.|\n",
    "|cello cell|last|**Musical prison?**|This clue is clever because it is a pun on the word \"cell.\" A \"cell\" can refer to a room in a prison, or it can refer to the part of a cello that contains the strings. |\n",
    "|covert convert|any|**Agent of change**|This clue is clever because it is a play on words. \"Agent of change\" can refer to someone who is trying to make a difference in the world, but it can also refer to a spy or undercover agent.|\n",
    "|crater cater|any|**A space rock that serves food**|The word \"crater\" refers to a large bowl-shaped depression in the ground, often formed by the impact of a meteorite or asteroid.|\n",
    "|finnish finish|any|**How a Suomi might end a race**|The word \"Suomi\" is the Finnish word for Finland. The clue suggests how a Finnish person might end a race, because it is a play on the words \"Finnish\" and \"finish.\"|\n",
    "|frame fame|any|**What kind of fame is displayed on a wall?**|This clue is clever because it is a play on words. The word \"fame\" can mean both \"celebrity\" and \"a picture in a frame.\" |\n",
    "|flying fling|any|**Airy affair**|This clue is a bit of a pun, as it plays on the two meanings of the word \"fling\": a brief and passionate romantic relationship, and the act of throwing something. |\n",
    "|ledge pledge|any|**A promise to <br>hang on**|The clue suggests that a \"pledge ledge\" is a promise to hang on to something, such as a promise to keep a secret or to repay a debt.|\n",
    "|pleat plea|last|**Fashionable plea for more folds?**|This clue is clever because it is a pun on the two meanings of the word \"pleat\": (1) A fold in a piece of fabric (2) A request or appeal|\n",
    "|plum plumb|last|**Perfectly perpendicular**|The clue suggests that a \"plum plumb\" is a line that is perfectly perpendicular to the ground, like a plumb line.|\n",
    "|quaker quake|any|**A religious tremor**|The clue suggests that a \"quaker quake\" is a type of religious tremor, which could be interpreted in a number of ways. For example, it could refer to a powerful spiritual experience, or to a time of social and political upheaval.|\n",
    "|quilt quit|any|**Sewing machine strike**|The clue suggests that a \"quilt quit\" is a strike by sewing machines, which would prevent people from making quilts.|\n",
    "|rabbit rabbi|last|**What kind of religious leader is all ears**|This clue is clever because it plays on the double meaning of the word \"ears.\" On the one hand, rabbits are known for having large ears. On the other hand, \"all ears\" is an idiom that means to be very attentive and eager to listen.|\n",
    "|rally orally|any|**Shout for support**|The answer is \"rally orally\", which is a phrase that could be used to describe someone shouting for support.|\n",
    "|seedy seed|last|**The kernel of a shady deal**|The clue suggests that a \"seedy seed\" is the beginning of something shady or immoral. This could refer to a criminal activity, a broken promise, or even a relationship that is doomed.|\n",
    "|stage sage|any|**wise actor**|The clue suggests that a \"stage sage\" is a wise actor who is at the top of their game.|\n",
    "|stocky stock|last|**Beefy broth**|This clue is a play on the two meanings of the word \"stock\": (1) A type of broth made from meat or bones. (2) A large quantity of something, especially goods or money.|\n",
    "|timer tier|any|**A level of timekeeping**|The clue suggests that a \"timer tier\" is a level of timekeeping, such as the different tiers of accuracy that are available for timers. For example, a timer used for cooking may have a lower tier of accuracy than a timer used for scientific research.|\n",
    "|yeast east|any|**Where the dough rises**|The clue suggests that \"yeast east\" is the direction in which the sun rises because yeast causes dough to rise.|\n",
    "\n",
    "What do you think? Are these clues any good? Can you do better, either by manually writing your own, or by writing a better program (e.g. by finding better word pairs, by constructing a better prompt, or by using a different LLM that performs better on this task)?"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
165 ipynb/bikeplaceshort.csv Normal file
@@ -0,0 +1,165 @@
name,miles,county,pct
#,,,
# San Mateo County,,SMC,
#,,,
Atherton,56.3,SMC,99.8
Bay Area Ridge Trail,395.6,SMC,21.4
Belmont,98.1,SMC,54.2
Brisbane,40.9,SMC,46.7
Broadmoor,8.8,SMC,38.2
Burlingame Hills,6,SMC,71.4
Burlingame,88.4,SMC,54.3
Burleigh Murray Park,2.1,SMC,91.6
Coal Creek Preserve,3.9,SMC,66.7
Colma,13.7,SMC,38.3
Daly City,148.1,SMC,27.2
East Palo Alto,48.3,SMC,99.6
El Granada,49.2,SMC,38.4
Emerald Lake Hills,24.6,SMC,99.8
Foster City,150,SMC,80.7
Half Moon Bay,68,SMC,32.2
Half Moon Bay State Beach,4.4,SMC,51.5
Hillsborough,85.3,SMC,52.3
Kensington Square,0.6,SMC,99.9
Ladera,8.1,SMC,99.9
Long Ridge Preserve,11.0,SMC,51.2
Los Trancos OSP,0.3,SMC,100
Los Trancos Woods,5.3,SMC,99.9
Menlo Oaks,3.5,SMC,99.9
Menlo Park,139.5,SMC,99.5
Millbrae,65,SMC,40.8
Montara,27.8,SMC,32.7
Moss Beach,19.7,SMC,44.3
North Fair Oaks,26.7,SMC,99.8
Pacifica,150.9,SMC,32.7
Palomar Park,4,SMC,99.9
Portola Valley,48.2,SMC,99.5
Purisima Creek Preserve,16.5,SMC,38.7
Redwood City,240.5,SMC,99.3
Russian Ridge Preserve,12.2,SMC,51.2
San Bruno,114,SMC,29.5
San Carlos,99,SMC,99.0
San Mateo,256,SMC,51.8
San Mateo Highlands,18,SMC,77.9
Sequoia Tract,11,SMC,99.7
Sky Londa,11.8,SMC,99.4
Skyline Ridge OSP,0.8,SMC,74.9
South San Francisco,185.3,SMC,27.2
West Menlo Park,11.2,SMC,99.8
Windy Hill Preserve,4.1,SMC,99.1
Woodside,75.2,SMC,99.5
#,,,
# Santa Clara County,,SCC,
#,,,
Branham,44,SCC,32.7
Campbell,119,SCC,28.8
Communications Hill,27.8,SCC,39.5
Cupertino,172,SCC,33.2
Edenvale,30,SCC,47
Foothills Preserve,1.1,SCC,86.8
Gardner,23.4,SCC,47.2
Gilroy,188.9,SCC,12.1
Los Altos,138.2,SCC,99.6
Los Altos Hills,91.3,SCC,99.4
Los Gatos,148,SCC,33.2
Loyola,18.3,SCC,99.6
Milpitas,224,SCC,37.3
Monte Sereno,20.4,SCC,44.7
Mountain View,208.1,SCC,99.2
Palo Alto,297.2,SCC,99.0
Parkview,42.5,SCC,33.4
San Francisco Bay Trail,260.8,SCC,62.7
San Jose,2618.7,SCC,27.7
Santa Clara,348,SCC,34
San Martin,35.3,SCC,31.1
Saratoga,180,SCC,29.7
Seven Trees,40.9,SCC,33.6
Spartan Keyes,64.3,SCC,35.7
Sunnyvale,357,SCC,52.4
Willow Glen,81.6,SCC,34.9
Willow Glen South,63.3,SCC,30.5
#,,,
# Alameda County,,,
#,,,
Alameda,206.7,ALA,12.2
Albany,42.7,ALA,6.8
Ashland,35.1,ALA,35.7
Berkeley,260.3,ALA,7.4
Cherryland,20.9,ALA,27.1
Emeryville,28.1,ALA,8
Fremont,780.2,ALA,32.2
Hayward,444.5,ALA,30.0
Hayward Acres,3.5,ALA,43.3
Newark,147,ALA,45.9
San Leandro,230.6,ALA,27.5
San Lorenzo,55.5,ALA,40.8
Union City,208.8,ALA,30.7
#,,,
# SF County,,SFC,
#,,,
Aquatic Park Fort Mason,6.4,SFC,15.4
Ashbury Heights,3.7,SFC,13
Balboa Terrace,3.4,SFC,18.2
Central Waterfront,10.2,SFC,6
Clarendon Heights,6,SFC,14.2
Cole Valley,1.7,SFC,18
Cow Hollow,12,SFC,11.9
Dogpatch,5.1,SFC,12.3
Financial District,9.4,SFC,10.2
Fisherman's Wharf,6.2,SFC,13.8
Forest Hill,6.1,SFC,15.9
Golden Gate Heights,17.8,SFC,10.7
Golden Gate Park,40.8,SFC,29.4
Lake Street,3.9,SFC,36.8
Lincoln Park,4.5,SFC,39.6
Little Hollywood,3.7,SFC,15.2
Mission Bay,13.8,SFC,8.6
Northern Waterfront,5.6,SFC,15.5
Pacific Heights,18,SFC,10.7
Panhandle,7.3,SFC,20.6
Polk Gulch,4,SFC,18.2
Presidio Heights,6.5,SFC,21.6
Presidio National Park,43.5,SFC,26.7
Presidio Terrace,2.8,SFC,43.9
Seacliff,4.1,SFC,29.3
South Beach,4.8,SFC,37.4
Sutro Heights,7.1,SFC,13.2
#,,,
# Far Away Places,,,
#,,,
Barangaroo,1.7,NSW,47.3
Bodega Bay,28.9,SON,17
Cambridge,180.8,MAS,6.2
Castle Rock State Park,11.2,SCC,51.2
Corte Madera,51,MAR,12.9
Dawes Point,1.8,NSW,29.2
Forest of Nisene Marks SP,44.0,SCC,30.7
Guerneville,22.7,SON,23.6
Healdsburg,53.7,SON,17.8
Marin Headlands GGNRA,65.7,MAR,31.9
Mill Valley,92.2,MAR,9.1
Millers Point,3.2,NSW,34.3
MIT,9.6,MAS,34.7
Mokelumne Hill,14.7,CAL,26.8
Mt Tamalpais State Park,31.7,MAR,38.7
Muir Beach,4.6,MAR,37.1
Rosie Riveter Park,5.5,CCC,29.3
San Rafael,260,MAR,3.7
Sausalito,32.7,MAR,12.9
Stinson Beach,11.2,MAR,32.9
#,,,
# Counties and Bigger,,,
#,,,
San Mateo County,2814,---,61.56
Santa Clara County,7569,---,32.8
Alameda County,5818,---,15.38
Marin County,2333,---,10.75
San Francisco County,1217,---,7.21
Napa County,1609,---,4.85
Sonoma County,4895,---,5.12
Santa Cruz County,2718,---,7.12
Contra Costa County,5945,---,2.05
#,,,
California,377037,---,1.7108
USA,6406754,---,0.10677
Earth,41974536,---,0.017159
@@ -1,5 +1,25 @@
 date year title hours miles feet
-##### 2019-2022: Mostly Eddington rides; most recent first
+##### 2019-2023: Mostly Eddington rides; most recent first
+Sat, 10/7 2023 Morning Ride 5:33:41 70.25 3,038
+Sat, 9/23 2023 Dumbarton / Niles 4:45:06 65.02 1,122
+Sat, 9/9 2023 Alameda to Treasure Island 4:43:08 62.97 1,378
+Fri, 9/1 2023 Santa Cruz + Ladera 6:14:38 67.59 4,990
+Sat, 8/12 2023 Norris Canyon + Ladera 6:53:48 74.97 5,470
+Fri, 7/28 2023 San Carlos completed 5:24:45 69.49 4,479
+Wed, 7/19 2023 MTV + RWC 5:22:20 64.14 1,145
+Wed, 6/28 2023 Los Altos Hills + MTV 5:51:01 62.77 1,687
+Tue, 6/20 2023 Ed Levin Park with Ted MTB 6:10:19 66.55 4,547
+Fri, 6/9 2023 Completed Los Altos and 50% Sunnyvale 5:37:26 67.20 1,266
+Wed, 6/7 2023 Los Altos 7:03:03 81.54 2,110
+Fri, 6/2 2023 Los Altos Hills 6:47:13 68.33 5,221
+Tue, 5/16 2023 Los Altos Hills paths 5:56:04 64.03 1,869
+Mon, 5/1 2023 Eden Landing 5:43:00 64.24 1,425
+Sun, 4/23 2023 Portola Loop with Juliet 5:09:21 62.43 2,805
+Sun, 3/26 2023 Redwood City 5:42:04 69.11 1,015
+Sat, 3/18 2023 Mostly Palo Alto 5:54:50 69.31 2,549
+Wed, 12/21 2022 Canada / Portola 4:18:54 62.83 2,226
+Sat, 11/19 2022 Cañada 5:02:48 63.06 3,340
+Fri, 10/21 2022 Alviso Levees + Seaport 5:06:26 67.14 931
 Sat, 9/17 2022 San Gregorio / Tunitas 6:33:44 80.53 6,015
 Thu, 7/14 2022 Bike Hut Classic 6:36:30 74.16 6,070
 Sat, 7/9 2022 Purisima Creek Trail 7:20:20 62.93 7,139