Add files via upload
This commit is contained in: parent 2df96d1cd4, commit 55d2d5f3a7
@ -9,8 +9,7 @@
|
||||
"\n",
|
||||
"# Advent of Code 2025: The AI LLM Edition\n",
|
||||
"\n",
|
||||
"*This notebook shows some solutions by Gemini, Claude, and ChatGPT, three AI Large Language Models (LLMs), for the\n",
|
||||
"2025 [**Advent of Code**](https://adventofcode.com/) (AoC) programming puzzles. In order to understand each puzzle, you'll have to look at the problem descriptions at [**Advent of Code**](https://adventofcode.com/2025) for each [**Day**](https://adventofcode.com/2025/day/1), and you can also look at [**my solutions**](Advent2025.ipynb), which I did before turning to the LLMs.*\n",
|
||||
"*This notebook shows some solutions by Gemini, Claude, and ChatGPT, three AI Large Language Models (LLMs), for the 2025 [**Advent of Code**](https://adventofcode.com/) (AoC) programming puzzles. In order to understand each puzzle, you'll have to look at the problem descriptions at [**Advent of Code**](https://adventofcode.com/2025) for each [**Day**](https://adventofcode.com/2025/day/1), and you can also look at [**my solutions**](Advent2025.ipynb), which I did before turning to the LLMs.*\n",
|
||||
"\n",
|
||||
"*All the code in this notebook is written by an LLM (except for the one line where I call the LLM's code for each puzzle). My comments (like this one) are in italics, and my prompts given to the LLMs are in **bold italics**. Sometimes I quote the LLM's responses; those are in* regular roman font.\n",
|
||||
"\n",
|
||||
@ -21,7 +20,7 @@
|
||||
"*Now that the 12 days are finished, here are my conclusions:*\n",
|
||||
"\n",
|
||||
"- *Overall, the LLMs did very well, producing code that gives the correct answer to every puzzle.*\n",
|
||||
"- *The run time were reasonably fast, all under a second, except for 12.1, which took about 3 minutes.*\n",
|
||||
"- *The run time were reasonably fast, all under a second, except for 12.1, which took about 2 minutes.*\n",
|
||||
"- *The three LLMS seemed to be roughly equal in quality.*\n",
|
||||
"- *The LLMs knew the things you would want an experienced engineer to know, and applied them at the right time:*\n",
|
||||
" - *How to see through the story about elves and christmas trees, etc. and getting to the real programming issues*\n",
|
||||
@ -38,7 +37,7 @@
|
||||
" - *advanced data structures such as Union-Find and dancing links*\n",
|
||||
" - *computational geometry algorithms including scantiness, flood fill, and ray-casting*\n",
|
||||
" - *recognizing an integer linear programming problem and knowing how to call a package*\n",
|
||||
" - *depth-first search, and recognizing search properties such as commutativity of actions*\n",
|
||||
" - *depth-first search, meet-in-the-middle search, and recognizing search properties such as commutativity of actions*\n",
|
||||
" - *data classes*\n",
|
||||
" - *sometimes type annotations (but not always)*\n",
|
||||
" - *sometimes good doc strings and comments (but not always, and sometimes too many comments).*\n",
|
||||
@ -89,7 +88,7 @@
|
||||
"source": [
|
||||
"# Day 1: Gemini 3 Pro\n",
|
||||
"\n",
|
||||
"*The [Day 1 **Part 1**](https://adventofcode.com/2025/day/1) puzzle is about turning the dial on a safe and counting how many times the pointer ends up at 0.*\n",
|
||||
"*The [**Day 1 Part 1**](https://adventofcode.com/2025/day/1) puzzle is about turning the dial on a safe and counting how many times the pointer ends up at 0.*\n",
|
||||
"\n",
|
||||
"*I started with the Gemini 3 Pro Fast model, which produced this code:*"
|
||||
]
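For readers who want the shape of the task without opening the puzzle page, here is a minimal sketch of the counting loop being described. It is not Gemini's code (that follows in the notebook), and the dial size of 100 and the signed-integer encoding of the rotations are assumptions for illustration only.

```python
def zero_hits(rotations: list[int], dial_size: int = 100) -> int:
    """Apply each rotation to the dial and count how often the pointer lands on 0.
    Hypothetical encoding: each rotation is a signed number of positions."""
    position = hits = 0
    for step in rotations:
        position = (position + step) % dial_size
        if position == 0:
            hits += 1
    return hits
```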
|
||||
@ -175,7 +174,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 1.1: .0007 seconds, answer 1182 correct"
|
||||
"Puzzle 1.1: .0007 seconds, correct answer: 1182 "
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
@ -305,7 +304,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 1.2: .0008 seconds, answer 7509 WRONG; EXPECTED ANSWER IS 6907"
|
||||
"Puzzle 1.2: .0009 seconds, WRONG!! answer: 7509 EXPECTED: 6907"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
@ -409,7 +408,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 1.2: .0008 seconds, answer 6907 correct"
|
||||
"Puzzle 1.2: .0008 seconds, correct answer: 6907 "
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
@ -427,9 +426,9 @@
|
||||
"id": "82fb1dca-1619-4ad7-9155-52fb4804470e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 2](https://adventofcode.com/2025/day/2) Claude Opus 4.5\n",
|
||||
"# Day 2: Claude Opus 4.5\n",
|
||||
"\n",
|
||||
"*I gave Claude the instructions for **Part 1** and it wrote some code that produces the correct answer but prints a lot of unneccessary debugging output along the way. I prompted it to \"**Change the code to not print anything, just return the answer**\" and got the following:*"
|
||||
"*For [**Day 2 Part 1**](https://adventofcode.com/2025/day/2) Claude wrote code that produces the correct answer but prints a lot of unneccessary debugging output along the way. I prompted it to \"**Change the code to not print anything, just return the answer**\" and got this:*"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -489,7 +488,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 2.1: .0355 seconds, answer 23560874270 correct"
|
||||
"Puzzle 2.1: .0383 seconds, correct answer: 23560874270 "
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
@ -573,7 +572,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 2.2: .0403 seconds, answer 44143124633 correct"
|
||||
"Puzzle 2.2: .0383 seconds, correct answer: 44143124633 "
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
@ -591,9 +590,9 @@
|
||||
"id": "d3533d6a-d12f-4dbf-b0e8-9d878c9bc283",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 3](https://adventofcode.com/2025/day/3) ChatGPT 5.1 Auto\n",
|
||||
"# Day 3: ChatGPT 5.1 Auto\n",
|
||||
"\n",
|
||||
"*The puzzle today is to pick the biggest two-digit number from a string of digits, like \"87\" from \"8675305\". Return the sum over all digit strings.*\n",
|
||||
"*The [**Day 3**](https://adventofcode.com/2025/day/3) puzzle is to pick the biggest two-digit number from a string of digits, like \"89\" from \"8675309\". The task is to compute the sum of biggest two-digit numbers over all the input digit strings.*\n",
|
||||
"\n",
|
||||
"*For **Part 1** ChatGPT gave a very brief analysis of the problem and produced this code (conspicuously lacking comments or doc strings):*"
|
||||
]
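The stated example ("89" from "8675309") pins down the task well enough for a tiny reference sketch. This is my illustration of the specification, not ChatGPT's solution; the brute-force pairing is fine because each digit string is short.

```python
def biggest_two_digit(digits: str) -> int:
    """Largest two-digit number formed by two digits kept in left-to-right order."""
    return max(int(digits[i] + digits[j])
               for i in range(len(digits) - 1)
               for j in range(i + 1, len(digits)))

def part1(lines: list[str]) -> int:
    """Sum of the biggest two-digit number in each input line."""
    return sum(biggest_two_digit(line) for line in lines)

assert biggest_two_digit("8675309") == 89
```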
|
||||
@ -646,7 +645,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 3.1: .0019 seconds, answer 17085 correct"
|
||||
"Puzzle 3.1: .0020 seconds, correct answer: 17085 "
|
||||
]
|
||||
},
|
||||
"execution_count": 13,
|
||||
@ -731,7 +730,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 3.2: .0026 seconds, answer 169408143086082 correct"
|
||||
"Puzzle 3.2: .0027 seconds, correct answer: 169408143086082"
|
||||
]
|
||||
},
|
||||
"execution_count": 15,
|
||||
@ -749,9 +748,9 @@
|
||||
"id": "00625b83-f56f-4fff-8d87-1e9cdbc02847",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 4](https://adventofcode.com/2025/day/4): Gemini 3 Pro\n",
|
||||
"# Day 4: Gemini 3 Pro\n",
|
||||
"\n",
|
||||
"*We are given a 2D map and asked how many squares have a \"@\" that is surrounded by fewer than 4 other \"@\" out of 8 neighbors.*\n",
|
||||
"*In [**Day 4**](https://adventofcode.com/2025/day/4) we are given a 2D map and asked how many squares have a \"@\" that is surrounded by fewer than 4 other \"@\" (out of the 8 orthogonal or diagonal neighbors).*\n",
|
||||
"\n",
|
||||
"*Gemini produced a solution to **Part 1** that is straightforward and efficient, although perhaps could use some abstraction (e.g. if they had a function to count neighbors, they wouldn't need the `continue` in the main loop).*"
|
||||
]
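As a point of comparison for the abstraction remark above, here is a sketch with the neighbor count pulled out into its own function, so no `continue` is needed. This is not Gemini's code; the grid is assumed to be a rectangular list of strings.

```python
def count_sparse_ats(grid: list[str]) -> int:
    """Count '@' cells that have fewer than 4 '@' among their 8 neighbors."""
    rows, cols = len(grid), len(grid[0])

    def at_neighbors(r: int, c: int) -> int:
        return sum(grid[r + dr][c + dc] == '@'
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < rows and 0 <= c + dc < cols)

    return sum(grid[r][c] == '@' and at_neighbors(r, c) < 4
               for r in range(rows) for c in range(cols))
```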
|
||||
@ -828,7 +827,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 4.1: .0084 seconds, answer 1569 correct"
|
||||
"Puzzle 4.1: .0088 seconds, correct answer: 1569 "
|
||||
]
|
||||
},
|
||||
"execution_count": 17,
|
||||
@ -923,7 +922,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 4.2: .1996 seconds, answer 9280 correct"
|
||||
"Puzzle 4.2: .2023 seconds, correct answer: 9280 "
|
||||
]
|
||||
},
|
||||
"execution_count": 19,
|
||||
@ -1032,7 +1031,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 4.2: .0329 seconds, answer 9280 correct"
|
||||
"Puzzle 4.2: .0332 seconds, correct answer: 9280 "
|
||||
]
|
||||
},
|
||||
"execution_count": 21,
|
||||
@ -1050,9 +1049,9 @@
|
||||
"id": "78434cfe-d728-453c-8f45-fc6b5fea18c3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 5](https://adventofcode.com/2025/day/5): Claude Opus 4.5\n",
|
||||
"# Day 5: Claude Opus 4.5\n",
|
||||
"\n",
|
||||
"*We are asked how many ingredient IDs from a list of IDs are fresh, according to a list of fresh ID ranges.*\n",
|
||||
"*In [**Day 5**](https://adventofcode.com/2025/day/5) we are asked how many ingredient IDs from a list of IDs are fresh, according to a list of fresh ID ranges.*\n",
|
||||
"\n",
|
||||
"*Claude produces a straightforward program that solves **Part 1** just fine and demonstrates good use of abstraction. This time it has nice doc strings; for Day 2 it had none. Go figure.*"
|
||||
]
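For the record, the whole of Part 1 as described fits in a couple of lines. A sketch (not Claude's code), assuming the fresh ranges have inclusive endpoints:

```python
def count_fresh(ids: list[int], ranges: list[tuple[int, int]]) -> int:
    """Count ingredient IDs that fall inside at least one fresh range."""
    return sum(any(lo <= ingredient <= hi for lo, hi in ranges)
               for ingredient in ids)
```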
|
||||
@ -1128,7 +1127,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 5.1: .0029 seconds, answer 635 correct"
|
||||
"Puzzle 5.1: .0029 seconds, correct answer: 635 "
|
||||
]
|
||||
},
|
||||
"execution_count": 23,
|
||||
@ -1227,7 +1226,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 5.2: .0001 seconds, answer 369761800782619 correct"
|
||||
"Puzzle 5.2: .0001 seconds, correct answer: 369761800782619"
|
||||
]
|
||||
},
|
||||
"execution_count": 25,
|
||||
@ -1245,11 +1244,11 @@
|
||||
"id": "b1503029-3a5f-4949-8502-75b051f78a23",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 6](https://adventofcode.com/2025/day/6): ChatGPT 5.1 Auto\n",
|
||||
"# Day 6: ChatGPT 5.1 Auto\n",
|
||||
"\n",
|
||||
"*We are asked to solve some math problems written in an unusal format (vertical instead of horizontal, with some special rules).*\n",
|
||||
"*For [**Day 6**](https://adventofcode.com/2025/day/6) we are asked to solve some math problems written in an unusal format (vertical instead of horizontal, with some special rules).*\n",
|
||||
"\n",
|
||||
"*For **Part 1** ChatGPT produced a program that is correct, but has poor abstraction, with one long 63-line function. (It also contains a pet peeve of mine: in lines 17–20 the pattern \"`if some_boolean: True else: False`\" can always be replaced with \"`some_boolean`\".)*"
|
||||
"*For **Part 1** ChatGPT produced a program that is correct, but has poor abstraction, with one long 63-line function. (It also contains a pet peeve of mine: in lines 17–20 the pattern \"`if some_boolean: True else: False`\" can always be replaced with \"`some_boolean`\".) And it would have been easier to replace the six lines with one: `sep = {c for c in range(width) if all(grid[r][c] == ' ' for r in range(h))}`."
|
||||
]
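To make the pet peeve concrete (with hypothetical names, not lines from ChatGPT's program): the verbose form carries no more information than the expression itself.

```python
# Verbose pattern of the "if some_boolean: True else: False" kind:
def column_is_separator(column_is_blank: bool) -> bool:
    if column_is_blank:
        return True
    else:
        return False

# Equivalent, and says what it means:
def column_is_separator(column_is_blank: bool) -> bool:
    return column_is_blank
```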
|
||||
},
|
||||
{
|
||||
@ -1345,7 +1344,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 6.1: .0034 seconds, answer 5877594983578 correct"
|
||||
"Puzzle 6.1: .0031 seconds, correct answer: 5877594983578 "
|
||||
]
|
||||
},
|
||||
"execution_count": 27,
|
||||
@ -1483,7 +1482,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 6.2: .0023 seconds, answer 11159825706149 correct"
|
||||
"Puzzle 6.2: .0023 seconds, correct answer: 11159825706149 "
|
||||
]
|
||||
},
|
||||
"execution_count": 29,
|
||||
@ -1501,9 +1500,9 @@
|
||||
"id": "110a8177-d4d8-4a61-9f74-1ed6444ec38f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 7](https://adventofcode.com/2025/day/7): Gemini 3 Pro\n",
|
||||
"# Day 7: Gemini 3 Pro\n",
|
||||
"\n",
|
||||
"*We are given a 2D grid of characters where a beam enters at the top and moves downward, but is split to both sides by a \"`^`\" character. We need to compute the total number of split beams at the bottom.*\n",
|
||||
"*In [**Day 7**](https://adventofcode.com/2025/day/7) we are given a 2D grid of characters where a beam enters at the top and moves downward, but is split to both sides by a \"`^`\" character. We need to compute the total number of split beams at the bottom.*\n",
|
||||
"\n",
|
||||
"*Gemini's code for **Part 1** is a bit verbose, but gets the job done.*"
|
||||
]
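A sketch of the column-count propagation that this puzzle calls for; the author's own solution (mentioned in Gemini's comparison further down) is `Counter`-based in a similar spirit. This is illustrative only and is not Gemini's code: the entry column and the behavior at the grid edges are assumptions, not taken from the puzzle text.

```python
from collections import Counter

def beams_at_bottom(grid: list[str], start_col: int) -> int:
    """Track the number of beams in each column, one row at a time.
    A beam that lands on '^' splits into the columns to its left and right
    (assumed behavior); otherwise it continues straight down."""
    beams = Counter({start_col: 1})
    for row in grid:
        next_row = Counter()
        for col, n in beams.items():
            if row[col] == '^':
                if col > 0:
                    next_row[col - 1] += n
                if col + 1 < len(row):
                    next_row[col + 1] += n
            else:
                next_row[col] += n
        beams = next_row
    return sum(beams.values())
```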
|
||||
@ -1597,7 +1596,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 7.1: .0004 seconds, answer 1681 correct"
|
||||
"Puzzle 7.1: .0004 seconds, correct answer: 1681 "
|
||||
]
|
||||
},
|
||||
"execution_count": 31,
|
||||
@ -1712,7 +1711,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 7.2: .0008 seconds, answer 422102272495018 correct"
|
||||
"Puzzle 7.2: .0009 seconds, correct answer: 422102272495018"
|
||||
]
|
||||
},
|
||||
"execution_count": 33,
|
||||
@ -1730,7 +1729,7 @@
|
||||
"id": "7320594a-1284-4b9a-9a22-c6d2829b5e03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*My friend Barry Hayes suggested I ask Gemini \"**How would you compare your code to this:\"***"
|
||||
"*My friend Barry Hayes suggested I ask Gemini to compare its code to mine. So I asked \"**How would you compare your code to this:\"***"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -1758,7 +1757,7 @@
|
||||
"id": "7a39ec7d-a27b-40ec-9d32-13f922cc764e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*where this code is what I wrote for 7.2. Here is Gemini's reply:*\n",
|
||||
"*where this is what I wrote for 7.2. Here is Gemini's reply:*\n",
|
||||
"\n",
|
||||
"This is a great comparison. Your code is much more concise and \"Pythonic,\" using collections.Counter and in-place updates to reduce verbosity. It strips away a lot of the boilerplate that my solution included for robustness.\n",
|
||||
"\n",
|
||||
@ -1871,7 +1870,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 7.2: .0011 seconds, answer 422102272495018 correct"
|
||||
"Puzzle 7.2: .0011 seconds, correct answer: 422102272495018"
|
||||
]
|
||||
},
|
||||
"execution_count": 36,
|
||||
@ -1889,9 +1888,9 @@
|
||||
"id": "518c135a-6b5e-4602-b37f-c79c0573a9a1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 8](https://adventofcode.com/2025/day/8): Claude Opus 4.5\n",
|
||||
"# Day 8: Claude Opus 4.5\n",
|
||||
"\n",
|
||||
"*In this puzzle there is a collection of junction boxes with coordinates in 3D space. We are asked to connect them together, conneting the nerarest two first, and continuing for 1000 connections, and then report the product of the sizes of the three largest connected circuits.*\n",
|
||||
"*In the [**Day 8**](https://adventofcode.com/2025/day/8) puzzle there is a collection of junction boxes with coordinates in 3D space. We are asked to connect them together, conneting the nerarest two first, and continuing for 1000 connections, and then report the product of the sizes of the three largest connected circuits.*\n",
|
||||
"\n",
|
||||
"*Here's Claude's code for **Part 1**:*"
|
||||
]
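The description above maps naturally onto Union-Find, which is also what the summary table says both solutions used. Here is a compact sketch, not Claude's code; how ties are broken, and whether a pair that is already connected still consumes one of the 1000 connections, are assumptions.

```python
from collections import Counter
from itertools import combinations
from math import dist, prod

def three_largest_product(boxes: list[tuple[int, int, int]], connections: int = 1000) -> int:
    """Join the closest pairs of junction boxes first, make `connections` joins,
    then return the product of the sizes of the three largest circuits."""
    parent = list(range(len(boxes)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = sorted(combinations(range(len(boxes)), 2),
                   key=lambda p: dist(boxes[p[0]], boxes[p[1]]))
    for a, b in pairs[:connections]:
        parent[find(a)] = find(b)           # union (no rank; fine for a sketch)

    sizes = Counter(find(i) for i in range(len(boxes))).values()
    return prod(sorted(sizes)[-3:])
```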
|
||||
@ -2012,7 +2011,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 8.1: .2886 seconds, answer 24360 correct"
|
||||
"Puzzle 8.1: .2977 seconds, correct answer: 24360 "
|
||||
]
|
||||
},
|
||||
"execution_count": 38,
|
||||
@ -2151,7 +2150,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 8.2: .2857 seconds, answer 2185817796 correct"
|
||||
"Puzzle 8.2: .3003 seconds, correct answer: 2185817796 "
|
||||
]
|
||||
},
|
||||
"execution_count": 40,
|
||||
@ -2169,9 +2168,9 @@
|
||||
"id": "c6db8a6e-47bf-490f-a54c-6472b4f935a0",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 9](https://adventofcode.com/2025/day/9): ChatGPT 5.1 Auto\n",
|
||||
"# Day 9: ChatGPT 5.1 Auto\n",
|
||||
"\n",
|
||||
"*We are given the (x, y) coordsinates of a collection of red tiles on the floor, and asked what is the largest rectangle with two red tiles as corners.*\n",
|
||||
"*In [**Day 9**](https://adventofcode.com/2025/day/9) we are given the (x, y) coordsinates of a collection of red tiles on the floor, and asked what is the largest rectangle with two red tiles as corners.*\n",
|
||||
"\n",
|
||||
"*For **Part 1**, I was getting tired of all the programs that have a `main` that reads from input and prints the answer, so I told ChatGPT: **Refactor to have a function that takes the points as input and returns the area** and got this:*"
|
||||
]
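A brute-force sketch of Part 1 as described, checking every pair of red tiles; this is not ChatGPT's code. Whether the area is measured in tiles, i.e. `(|dx| + 1) * (|dy| + 1)`, is an assumption about the puzzle's scoring.

```python
def largest_rectangle_area(points: list[tuple[int, int]]) -> int:
    """Largest axis-aligned rectangle whose opposite corners are both red tiles."""
    return max((abs(x1 - x2) + 1) * (abs(y1 - y2) + 1)
               for i, (x1, y1) in enumerate(points)
               for x2, y2 in points[i + 1:])
```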
|
||||
@ -2234,7 +2233,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 9.1: .0094 seconds, answer 4772103936 correct"
|
||||
"Puzzle 9.1: .0097 seconds, correct answer: 4772103936 "
|
||||
]
|
||||
},
|
||||
"execution_count": 42,
|
||||
@ -2442,7 +2441,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 9.2: .4376 seconds, answer 1529675217 correct"
|
||||
"Puzzle 9.2: .4522 seconds, correct answer: 1529675217 "
|
||||
]
|
||||
},
|
||||
"execution_count": 44,
|
||||
@ -2460,9 +2459,15 @@
|
||||
"id": "8e7b6f1b-0ab8-43ef-8b15-764473117b3a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 10](https://adventofcode.com/2025/day/10): Gemini 3 Pro\n",
|
||||
"# Day 10: Gemini 3 Pro\n",
|
||||
"\n",
|
||||
"*For [**Day 10**](https://adventofcode.com/2025/day/10) we are given some descriptions of machines. See [AoC](https://adventofcode.com/2025/day/10) or [my other notebook](Advent-2025.ipynb) for details, but the description:*\n",
|
||||
"\n",
|
||||
" [#....] (0,2,3) (0,2,3,4) (2,3) (0,1,2) (0,3,4) (3) (1,2) {75,18,60,71,39}\n",
|
||||
"\n",
|
||||
"*means that the machine has 5 lights, and the goal is to turn the first one on (`#....`), by pushing buttons. There are 7 buttons, the first one toggles lights 0, 2, and 3. We want to know the minimal number of button presses. The last 5 numbers are used only in Part 2, where they are the desired joltage of each light.*\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"*We are given some descriptions of machines; see [AoC Day 10](https://adventofcode.com/2025/day/10) or [my other notebook](Advent-2025.ipynb) for details.*\n",
|
||||
"\n",
|
||||
"*Gemini had no problem with **Part 1:***"
|
||||
]
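Since pressing a toggle button twice is a no-op, Part 1 as described reduces to finding the smallest subset of buttons whose combined toggles produce the goal pattern. A bitmask sketch of that idea (not Gemini's code; the mask encoding is mine):

```python
from itertools import combinations

def fewest_presses(buttons: list[list[int]], goal_lights: list[int]) -> int:
    """buttons[j] lists the lights toggled by button j; goal_lights lists the
    lights that should end up on. Try subsets in order of increasing size."""
    masks = [sum(1 << light for light in lights) for lights in buttons]
    goal = sum(1 << light for light in goal_lights)
    for k in range(len(masks) + 1):
        for subset in combinations(masks, k):
            state = 0
            for mask in subset:
                state ^= mask
            if state == goal:
                return k
    raise ValueError("no combination of presses reaches the goal")
```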
|
||||
@ -2601,7 +2606,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 10.1: .0019 seconds, answer 441 correct"
|
||||
"Puzzle 10.1: .0019 seconds, correct answer: 441 "
|
||||
]
|
||||
},
|
||||
"execution_count": 46,
|
||||
@ -2621,7 +2626,7 @@
|
||||
"id": "f407a27f-f1ac-4c4a-bd46-649449c4dbf1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"***Part 2*** *was trickier. Gemini's first solution used z3 (it even gave nice instructions for how to pip install z3), but I responded with the prompt **Can you do it without using z3?** to which Gemini wrote its own Gaussian elimination code:*"
|
||||
"***Part 2*** *was trickier: now each button press increases the joltage of the each numbered light by 1 and we want to know the inimal number of presses to reach the joltage requirements. Gemini's first solution used z3 (it even gave nice instructions for how to `pip install z3`), but I responded with the prompt **Can you do it without using z3?** to which Gemini wrote its own Gaussian elimination code:*"
|
||||
]
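The summary table later notes that the author's own Part 2 solution was based on `milp`; for context, here is roughly what that formulation looks like as an integer linear program with SciPy. This is my sketch under that framing, not Gemini's Gaussian-elimination code, and the helper name and data layout are made up for illustration.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

def min_total_presses(buttons: list[list[int]], target: list[int]) -> int:
    """Minimize the total number of presses x[j] >= 0 (integers) such that
    the presses add up to the required joltage of every light: A @ x == target."""
    A = np.zeros((len(target), len(buttons)))
    for j, lights in enumerate(buttons):
        for light in lights:
            A[light, j] = 1                   # pressing button j adds 1 jolt to this light
    result = milp(c=np.ones(len(buttons)),    # objective: total presses
                  constraints=LinearConstraint(A, target, target),
                  integrality=np.ones(len(buttons)),
                  bounds=Bounds(lb=0))
    return round(result.x.sum())
```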
|
||||
},
|
||||
{
|
||||
@ -2816,7 +2821,7 @@
|
||||
"id": "89366a12-507d-4730-9be9-df757bb999c6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*The part that says `if not free_cols: search(0, []) else: search(0, [])` is a bit unsettling, and I'm not a big fan of `nonlocal` in this context, but the code works; the only downside is that it takes about 10 seconds to run.*"
|
||||
"*The part that says `if not free_cols: search(0, []) else: search(0, [])` is a bit unsettling, and I'm not a big fan of `nonlocal` in this context, but the code works; the only downside is that it takes about 3 seconds to run, a lot more than previous problems.*"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -2828,7 +2833,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 10.2: 3.4554 seconds, answer 18559 correct"
|
||||
"Puzzle 10.2: 3.5274 seconds, correct answer: 18559 "
|
||||
]
|
||||
},
|
||||
"execution_count": 48,
|
||||
@ -2948,7 +2953,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 10.2: .0461 seconds, answer 18559 correct"
|
||||
"Puzzle 10.2: .0480 seconds, correct answer: 18559 "
|
||||
]
|
||||
},
|
||||
"execution_count": 50,
|
||||
@ -3098,9 +3103,9 @@
|
||||
"id": "a23b652e-6250-4db1-8f1e-32d2cf77b4c5",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 11](https://adventofcode.com/2025/day/11): Claude Opus 4.5\n",
|
||||
"# Day 11: Claude Opus 4.5\n",
|
||||
"\n",
|
||||
"*We are given inputs like `qxn: mow khk`, whihc means that device `qxn` outputs to `mow` and `khk`, and are asked how many distinct output paths there are from the device named `you` to the device named `out`.*\n",
|
||||
"*For [**Day 11**](https://adventofcode.com/2025/day/11) we are given inputs like `qxn: mow khk`, whihc means that device `qxn` outputs to `mow` and `khk`, and are asked how many distinct output paths there are from the device named `you` to the device named `out`.*\n",
|
||||
"\n",
|
||||
"*Claude had no trouble solving **Part 1**. It even volunteered two possible implementations of `count_paths`. One thing was strange:*\n",
|
||||
"\n",
|
||||
@ -3211,7 +3216,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 11.1: .0003 seconds, answer 574 correct"
|
||||
"Puzzle 11.1: .0003 seconds, correct answer: 574 "
|
||||
]
|
||||
},
|
||||
"execution_count": 53,
|
||||
@ -3327,7 +3332,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 11.2: .0009 seconds, answer 306594217920240 correct"
|
||||
"Puzzle 11.2: .0011 seconds, correct answer: 306594217920240"
|
||||
]
|
||||
},
|
||||
"execution_count": 55,
|
||||
@ -3345,11 +3350,11 @@
|
||||
"id": "900226c6-ef8f-4be0-b3db-565d8f30c8b8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# [Day 12](https://adventofcode.com/2025/day/12): ChatGPT 5.2 Auto\n",
|
||||
"# Day 12: ChatGPT 5.2 Auto\n",
|
||||
"\n",
|
||||
"***Note:*** *ChatGPT has a new version now: 5.2. I don't notice a big difference from 5.1, but I only did this one interaction.*\n",
|
||||
"***Note:*** *ChatGPT has a new version today: 5.2. I don't notice a big difference from 5.1, but I only did this one interaction.*\n",
|
||||
"\n",
|
||||
"*We are given some 3x3 grids describing the shapes of some oddly-shaped Christmas presents, thena re given some regions with given width and length, and asked if a specified number of presents of each kind can fit in the region.*\n",
|
||||
"*For [**Day 12**](https://adventofcode.com/2025/day/12) we are given some 3x3 grids describing the shapes of some oddly-shaped Christmas presents, thena re given some regions with given width and length, and asked if a specified number of presents of each kind can fit in the region.*\n",
|
||||
"\n",
|
||||
"*In my prompt I included my actual input, because that is key to the shortcut for solving the problem (which I covered in [**my notebook**](Advent-2025.ipynb)). ChatGPT didn't detect the shortcut and wrote code to rotate the shapes and search through possible placements. ChatGPT did have the check for `total_area > W * H`, so it is able to instantly reject the regions with too many presents (about half of them). But for the regions where there is a trivial fit into 3x3 squares, ChatGPT's code still tries to pack them in tightly rather than doing the simple layout.* "
|
||||
]
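The two cheap tests described above (reject on total area, accept on a trivial 3x3 layout) can be expressed in a few lines. This sketch is my paraphrase of the shortcut the notebook alludes to, not code from either the author or ChatGPT, and it deliberately leaves the hard case to a real packing search.

```python
def quick_verdict(total_area: int, all_fit_in_3x3: bool, n_presents: int,
                  width: int, height: int) -> bool | None:
    """Decide a region without searching, when one of the easy cases applies."""
    if total_area > width * height:
        return False                 # presents can't possibly fit by area alone
    if all_fit_in_3x3 and (width // 3) * (height // 3) >= n_presents:
        return True                  # one present per 3x3 box is a trivial layout
    return None                      # undecided: fall back to a packing search
```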
|
||||
@ -3606,7 +3611,7 @@
|
||||
"id": "2cbca4d7-773c-4027-8f27-270887180ee1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Below we see that ChatGPT's code works, but it takes 2 minutes to run:*"
|
||||
"*Kudos to ChatGPT for writing code that works, and for quickly rejecting regions where `total_area > W * H`. But by failing to immediately detect the cases where all the presents trivially fit into 3x3 boxes, the program takes two minutes to run, when it could have been done in under a millisecond. I'm not going to make you wait two minutes, but if you want to you can uncomment the code below:*"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -3614,31 +3619,11 @@
|
||||
"execution_count": 57,
|
||||
"id": "90ecec67-fac0-4ad4-9047-0c7c9344b30e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Puzzle 12.1: 112.2115 seconds, answer 454 correct"
|
||||
]
|
||||
},
|
||||
"execution_count": 57,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"text = get_text(12)\n",
|
||||
"\n",
|
||||
"answer(12.1, 454, lambda:\n",
|
||||
" solve(text))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "26b54768-6a65-4ae3-9318-d24b40a30911",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Kudos to ChatGPT for writing code that works, and for quickly rejecting regions where `total_area > W * H`. But by failing to immediately detect the cases where all the presents trivially fit into 3x3 squares, this program takes over 3 minutes to run, when it could have been done in a millisecond.*"
|
||||
"# text = get_text(12)\n",
|
||||
"# answer(12.1, 454, lambda:\n",
|
||||
"# solve(text))"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -3690,7 +3675,7 @@
|
||||
"\n",
|
||||
"This is not a library. It’s a memory prosthetic.\n",
|
||||
"\n",
|
||||
"*Below I merge the three responses into one for the four areas where they all wrote very similar code, and then I give the functions that were unique to one LLM:*"
|
||||
"*Below I merge the three utility libraries into one for the four areas where they all wrote very similar code, and then I give the functions that were unique to each LLM:*"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -3894,11 +3879,13 @@
|
||||
"id": "8aa26008-a652-4860-9c84-5ba4344d32f3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Summary\n",
|
||||
"# Run Times, LOC, and Notes\n",
|
||||
"\n",
|
||||
"*Here are the run times, lines-of-code counts, and some comments.*\n",
|
||||
"\n",
|
||||
"*The LLM lines-of-code count is 5 times the human count. The LLM run times are roughly double the human-written run times, if we throw out 12.1, where the human noticed the trick and the LLM didn't. But all the solutions run in under a second, so run time is not a big deal.*"
|
||||
"*The LLM run times are roughly double the human-written run times. (This is after throwing out 12.1, because the human interepreted it as \"solve my particular input\" and the LLM as \"solve any possible input.\")*\n",
|
||||
"\n",
|
||||
"*The LLM lines-of-code count is about 5 times the human count.*\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -3910,29 +3897,70 @@
|
||||
" | --- | ------ | ---- | ----- | --- | ----- | ---|\n",
|
||||
" | 1.1 | Gemini | .0007 | .0002 | 51 | 6 | Straightforward and easy for LLM and human. | \n",
|
||||
" | 1.2 | Gemini | .0008 | .0004 | 75 | 11 | Both LLM and human erred on the distance from 0 to 0. | \n",
|
||||
" | 2.1 | Claude | .0355 | .0001 | 29 | 17 | Easy | \n",
|
||||
" | 2.1 | Claude | .0355 | .0001 | 29 | 17 | | \n",
|
||||
" | 2.2 | Claude | .0403 | .0002 | 35 | 16 | Both LLM and human found the more efficient half-digits approach | \n",
|
||||
" | 3.1 | ChatGPT | .0019 | .0003 | 22 | 11 | Easy | \n",
|
||||
" | 3.2 | ChatGPT | .0026 | .0008 | 42 | 14 | Easy | \n",
|
||||
" | 4.1 | Gemini | .0084 | .0194 | 44 | 9 | Easy | \n",
|
||||
" | 3.1 | ChatGPT | .0019 | .0003 | 22 | 11 | | \n",
|
||||
" | 3.2 | ChatGPT | .0026 | .0008 | 42 | 14 | | \n",
|
||||
" | 4.1 | Gemini | .0084 | .0194 | 44 | 9 | | \n",
|
||||
" | 4.2 | Gemini | .0329 | .0495 | 52 | 8 | LLM chose the less efficient scan-whole-grid approach | \n",
|
||||
" | 5.1 | Claude | .0029 | .0045 | 45 | 11 | Easy | \n",
|
||||
" | 5.2 | Claude | .0001 | .0000 | 58 | 9 | Easy | \n",
|
||||
" | 6.1 | ChatGPT | .0034 | .0008 | 67 | 7 | Easy; bad “if x: True else: False” idiom by LLM | \n",
|
||||
" | 6.2 | ChatGPT | .0023 | .0013 | 87 | 27 | Easy; LLM overly verbose | \n",
|
||||
" | 7.1 | Gemini | .0004 | .0003 | 63 | 13 | Easy | \n",
|
||||
" | 7.2 | Gemini | .0011 | .0007 | 70 | 11 | Easy | \n",
|
||||
" | 8.1 | Claude | .2886 | .1981 | 91 | 27 | Easy | \n",
|
||||
" | 8.2 | Claude | .2857 | .2034 | 82 | 11 | Easy; but LLMs Union-Find data type runs slower than mine. | \n",
|
||||
" | 9.1 | ChatGPT | .0094 | .0187 | 33 | 7 | Easy | \n",
|
||||
" | 5.1 | Claude | .0029 | .0045 | 45 | 11 | | \n",
|
||||
" | 5.2 | Claude | .0001 | .0000 | 58 | 9 | | \n",
|
||||
" | 6.1 | ChatGPT | .0034 | .0008 | 67 | 7 | bad “if x: True else: False” idiom by LLM | \n",
|
||||
" | 6.2 | ChatGPT | .0023 | .0013 | 87 | 27 | LLM overly verbose | \n",
|
||||
" | 7.1 | Gemini | .0004 | .0003 | 63 | 13 | | \n",
|
||||
" | 7.2 | Gemini | .0011 | .0007 | 70 | 11 | | \n",
|
||||
" | 8.1 | Claude | .2886 | .1981 | 91 | 27 | | \n",
|
||||
" | 8.2 | Claude | .2857 | .2034 | 82 | 11 | but LLMs Union-Find data type runs slower than mine. | \n",
|
||||
" | 9.1 | ChatGPT | .0094 | .0187 | 33 | 7 | | \n",
|
||||
" | 9.2 | ChatGPT | .4376 | .0046 | 157 | 36 | LLM code a bit complicated; human uses “2 point” trick for speedup | \n",
|
||||
" | 10.1 | Gemini | .0019 | .0242 | 101 | 18 | Easy | \n",
|
||||
" | 10.1 | Gemini | .0019 | .0242 | 101 | 18 | | \n",
|
||||
" | 10.2 | Gemini | .0461 | .0680 | 70 | 13 | milp solutions similar; LLM offers other solutions | \n",
|
||||
" | 11.1 | Claude | .0003 | .0001 | 83 | 11 | Easy; LLM has a bit of vestigial code | \n",
|
||||
" | 11.2 | Claude | .0009 | .0010 | 77 | 11 | Easy | \n",
|
||||
" | 12.1 | ChatGPT | 112.2 | .0006 | 238 | 20 | Human saw shortcut to avoid search; LLM wrote search functions | \n",
|
||||
" | 11.1 | Claude | .0003 | .0001 | 83 | 11 | LLM has a bit of vestigial code | \n",
|
||||
" | 11.2 | Claude | .0009 | .0010 | 77 | 11 | | \n",
|
||||
" | 12.1 | ChatGPT | ----- | .0006 | 238 | 20 | Human used shortcut to avoid search; LLM wrote slow search | \n",
|
||||
" | **TOTAL** | |**1.204** | **.597** | **1672** | **324** | **Total time ignores 12.1. Overall, Human code is 5x briefer, 2x faster** | "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 59,
|
||||
"id": "f66db331-b68b-4588-908e-e561da114ecc",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Puzzle 1.1: .0007 seconds, correct answer: 1182 \n",
|
||||
"Puzzle 1.2: .0008 seconds, correct answer: 6907 \n",
|
||||
"Puzzle 2.1: .0383 seconds, correct answer: 23560874270 \n",
|
||||
"Puzzle 2.2: .0383 seconds, correct answer: 44143124633 \n",
|
||||
"Puzzle 3.1: .0020 seconds, correct answer: 17085 \n",
|
||||
"Puzzle 3.2: .0027 seconds, correct answer: 169408143086082\n",
|
||||
"Puzzle 4.1: .0088 seconds, correct answer: 1569 \n",
|
||||
"Puzzle 4.2: .0332 seconds, correct answer: 9280 \n",
|
||||
"Puzzle 5.1: .0029 seconds, correct answer: 635 \n",
|
||||
"Puzzle 5.2: .0001 seconds, correct answer: 369761800782619\n",
|
||||
"Puzzle 6.1: .0031 seconds, correct answer: 5877594983578 \n",
|
||||
"Puzzle 6.2: .0023 seconds, correct answer: 11159825706149 \n",
|
||||
"Puzzle 7.1: .0004 seconds, correct answer: 1681 \n",
|
||||
"Puzzle 7.2: .0011 seconds, correct answer: 422102272495018\n",
|
||||
"Puzzle 8.1: .2977 seconds, correct answer: 24360 \n",
|
||||
"Puzzle 8.2: .3003 seconds, correct answer: 2185817796 \n",
|
||||
"Puzzle 9.1: .0097 seconds, correct answer: 4772103936 \n",
|
||||
"Puzzle 9.2: .4522 seconds, correct answer: 1529675217 \n",
|
||||
"Puzzle 10.1: .0019 seconds, correct answer: 441 \n",
|
||||
"Puzzle 10.2: .0480 seconds, correct answer: 18559 \n",
|
||||
"Puzzle 11.1: .0003 seconds, correct answer: 574 \n",
|
||||
"Puzzle 11.2: .0011 seconds, correct answer: 306594217920240\n",
|
||||
"\n",
|
||||
"Time in seconds: sum = 1.246, mean = .057, median = .003, max = .452\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"summary(answers)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
File diff suppressed because it is too large
@ -15,7 +15,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -62,16 +62,16 @@
|
||||
" - Applies `parser` to each section and returns the results as a tuple of records.\n",
|
||||
" - Useful parser functions include `ints`, `digits`, `atoms`, `words`, and the built-ins `int` and `str`.\n",
|
||||
" - Prints the first few input lines and output records. This is useful to me as a debugging tool, and to the reader.\n",
|
||||
" - The defaults are `parser=str, sections=lines`, so by default `parse(n)` gives a tuple of lines from fuile *day*."
|
||||
" - The defaults are `parser=str, sections=lines`, so by default `parse(n)` gives a tuple of lines from file \"AOC/*year*/input*n*.txt\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"current_year = 2023 # Subdirectory name for input files\n",
|
||||
"current_year = 2025 # Subdirectory name for input files\n",
|
||||
"\n",
|
||||
"lines = str.splitlines # By default, split input text into lines\n",
|
||||
"\n",
|
||||
@ -103,11 +103,21 @@
|
||||
" if show:\n",
|
||||
" types = Counter(map(type, items))\n",
|
||||
" counts = ', '.join(f'{n} {t.__name__}{\"\" if n == 1 else \"s\"}' for t, n in types.items())\n",
|
||||
" print(f'{hr}\\n{source} ➜ {counts}:\\n{hr}')\n",
|
||||
" if len(types) == 1 and hasattr(items[0], '__len__'):\n",
|
||||
" size = f' of size {describe_range(mapt(len, items))}'\n",
|
||||
" elif len(types) == 1 and hasattr(items[0], '__lt__'):\n",
|
||||
" size = f' in range {min(items)} to {max(items)}'\n",
|
||||
" else:\n",
|
||||
" size = ''\n",
|
||||
" print(f'{hr}\\n{source} ➜ {counts}{size}:\\n{hr}')\n",
|
||||
" for line in items[:show]:\n",
|
||||
" print(truncate(line))\n",
|
||||
" if show < len(items):\n",
|
||||
" print('...')"
|
||||
" print('...')\n",
|
||||
"\n",
|
||||
"def describe_range(numbers) -> str:\n",
|
||||
" mini, maxi = min(numbers), max(numbers)\n",
|
||||
" return str(mini) if mini == maxi else f'{mini} to {maxi}' "
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -119,20 +129,9 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"((9, 5), (123, 456))"
|
||||
]
|
||||
},
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"Char = str # Intended as the type of a one-character string\n",
|
||||
"Atom = Union[str, float, int] # The type of a string or number\n",
|
||||
@ -178,7 +177,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 17,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -204,11 +203,20 @@
|
||||
" \n",
|
||||
" def __repr__(self) -> str:\n",
|
||||
" \"\"\"The repr of an answer shows what happened.\"\"\"\n",
|
||||
" secs = f'{self.secs:7.4f}'.replace(' 0.', ' .')\n",
|
||||
" comment = (f'' if self.got == unknown else\n",
|
||||
" f' ok' if self.ok else \n",
|
||||
" f' WRONG; expected answer is {self.solution}')\n",
|
||||
" return f'Puzzle {self.puzzle:4.1f}: {secs} seconds, answer {self.got:<15}{comment}'"
|
||||
" secs = _zap0(f'{self.secs:7.4f}')\n",
|
||||
" correct = 'correct' if self.ok else 'WRONG!!'\n",
|
||||
" expected = '' if self.ok else f'EXPECTED: {self.solution}'\n",
|
||||
" return f'Puzzle {self.puzzle:4.1f}: {secs} seconds, {correct} answer: {self.got:<15}{expected}'\n",
|
||||
"\n",
|
||||
"def _zap0(field: str) -> str: return field.replace(' 0.', ' .')\n",
|
||||
"\n",
|
||||
"def summary(answers: dict):\n",
|
||||
" \"\"\"Summary report on the answers.\"\"\"\n",
|
||||
" for day in sorted(answers):\n",
|
||||
" print(answers[day])\n",
|
||||
" times = [answer.secs for answer in answers.values()]\n",
|
||||
" def stat(fn, times): return f'{fn.__name__} = {fn(times):.3f}'\n",
|
||||
" print('\\nTime in seconds:', ', '.join(_zap0(stat(fn, times)) for fn in (sum, mean, median, max)))"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -222,7 +230,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -356,7 +364,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -423,7 +431,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -502,7 +510,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -601,12 +609,12 @@
|
||||
"- Some neeed to know the sequence of intermediate states. \n",
|
||||
"- Some need to know the number of steps (or the total cost) to get to the final state.\n",
|
||||
"\n",
|
||||
"But sometimes you need all of that (or you think you might need it in Part 2), and sometimes you have a good heuristic estimate of the distance to a goal state, and you want to make sure to use it. If that's the case, then my `SearchProblem` class and `A_star_search` function may be approopriate."
|
||||
"But sometimes you need all of that (or you think you might need it in Part 2), and sometimes you have a good heuristic estimate of the distance to a goal state, and you want to make sure to use it. If that's the case, then my `SearchProblem` class and `A_star_search` function may be appropriate."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -634,7 +642,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -711,7 +719,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -740,7 +748,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -755,7 +763,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -770,7 +778,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -791,7 +799,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -872,7 +880,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.8.15"
|
||||
"version": "3.13.3"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||