{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
Peter Norvig
2019, revised 2024
Based on Yoav Goldberg's 2015 notebook
\n", "\n", "# The Effectiveness of Generative Language Models\n", "\n", "This notebook is an expansion of [**Yoav Goldberg's 2015 notebook**](https://nbviewer.org/gist/yoavg/d76121dfde2618422139) on character-level *n*-gram language models, which in turn was a response to [**Andrej Karpathy's 2015 blog post**](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) on recurrent neural network (RNN) language models. \n", "\n", "The term [**generative AI**](https://en.wikipedia.org/wiki/Generative_artificial_intelligencehttps://en.wikipedia.org/wiki/Generative_artificial_intelligence) is all the rage these days; it refers to computer programs that can *generate* something new (such as an image or a piece of text). \n", "\n", "In 2015 Karpathy's point was that recurrent neural networks were unreasonably effective at generating good text, even though they are at heart rather simple. Goldberg's point was that, yes, they are effective, but actually most of the magic is not in the RNNs, it is in the training data itself, and an even simpler model (with no neural nets) does just as well at generating English text. Goldberg and Karpathy agree that the RNN captures some aspects of C++ code that the simpler model does not. My point is to update the decade-old Python code, and make a few enhancements.\n", "\n", "\n", "## Definitions\n", "\n", "- A **generative language model** is a model that, when given an initial text, can predict what tokens come next; it can generate a continuation of a partial text. (And when the initial text is empty, it can generate the whole text.) In terms of probabilities, the model represents *P*(*t* | *h*), the probability distribution that the next token will be *t*, given a history of previous tokens *h*. The probability distribution is estimated by looking at a training corpus of text.\n", "\n", "- A **token** is a unit of text. 
It can be a single character (as covered by Karpathy and Goldberg) or more generally it can be a word or a part of a word (as allowed in my implementation).\n", "\n", "- A generative model stands in contrast to a **discriminative model**, such as an email spam filter, which can discriminate between spam and non-spam, but can't be used to generate a new sample of spam.\n", "\n", "\n", "- An **n-gram model** is a generative model that estimates the probability of *n*-token sequences. For example, a 5-gram character model would be able to say that given the previous 4 characters `'chai'`, the next character might be `'r'` or `'n'` (to form `'chair'` or `'chain'`). A 5-gram model is also called a model of **order** 4, because it maps from the 4 previous tokens to the next token.\n", "\n", "- A **recurrent neural network (RNN) model** is more powerful than an *n*-gram model, because it contains memory units that allow it to retain information from more than *n* tokens. See Karpathy for [details](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).\n", "\n", "- Current **large language models** such as ChatGPT, Claude, Llama, and Gemini use an even more powerful model called a [transformer](https://en.wikipedia.org/wiki/Transformer_%28deep_learning_architecture%29). Karpathy has [an introduction](https://www.youtube.com/watch?v=zjkBMFhNj_g&t=159s).\n", "\n", "## Training Data\n", "\n", "A language model learns probabilities by observing a corpus of text that we call the **training data**. \n", "\n", "Both Karpathy and Goldberg use the works of Shakespeare (about 800,000 words) as their initial training data:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 167204 832301 4573338 shakespeare_input.txt\n" ] } ], "source": [ "! [ -f shakespeare_input.txt ] || curl -O https://norvig.com/ngrams/shakespeare_input.txt\n", "! 
wc shakespeare_input.txt \n", "# Print the number of lines, words, and characters" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "First Citizen:\n", "Before we proceed any further, hear me speak.\n", "\n", "All:\n", "Speak, speak.\n", "\n", "First Citizen:\n", "You are all resolved rather to die than to famish?\n" ] } ], "source": [ "! head -8 shakespeare_input.txt \n", "# First 8 lines" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Python Code for n-Gram Model\n", "\n", "I do some imports and then define two data types:\n", "- A `Token` is an individual unit of text, a string of one or more characters.\n", "- A `LanguageModel` is a subclass of `defaultdict` that maps a history of length *n* tokens to a `Counter` of the number of times each token appears immediately following the history in the training data." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import random\n", "from typing import *\n", "from collections import defaultdict, Counter\n", "\n", "Token = str # Datatype to represent a token\n", "\n", "class LanguageModel(defaultdict): \n", " \"\"\"A mapping of {'history': Counter(next_token)}.\"\"\"\n", " def __init__(self, n: int):\n", " self.order = n\n", " super().__init__(Counter)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I define two main functions that do essentially all the work:\n", "\n", "- `train_LM` takes a sequence of tokens (the training data) and an integer *n*, and builds a language model of order *n*, formed by counting the times each token *t* occurs and storing that under the entry for the history *h* of *n* tokens that precede *t*. \n", "- `generate_tokens` generates a random sequence of tokens, given an (optional) start sequence of tokens. 
At each step it looks at the history of previously generated tokens and chooses a new token at random from the language model's counter for that history." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def train_LM(tokens, order: int) -> LanguageModel:\n", " \"\"\"Create and train a language model of given order on the given tokens.\"\"\"\n", " LM = LanguageModel(order)\n", " history = []\n", " for token in tokens:\n", " LM[cat(history)][token] += 1\n", " history = (history + [token])[-order:] \n", " return LM\n", "\n", "def generate_tokens(LM: LanguageModel, length=1000, start=()) -> List[Token]:\n", " \"\"\"Generate a random text of `length` tokens, with an optional start, from `LM`.\"\"\"\n", " tokens = list(start)\n", " while len(tokens) < length:\n", " history = cat(tokens[-LM.order:])\n", " tokens.append(random_sample(LM[history]))\n", " return tokens" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are three auxiliary functions:\n", "- `gen` is a convenience function to call `generate_tokens`, concatenate the resulting tokens, and print them.\n", "- `random_sample` randomly chooses a single token from a Counter, with probability in proportion to its count.\n", "- `cat` is a utility function to concatenate strings (tokens) into one big string." 
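] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick aside (a self-contained sketch, not part of the original notebook): the sampling step relies on `random.choices`, which picks elements in proportion to the given weights. Here is a tiny demonstration, using the counts that the Shakespeare model will turn out to have for the history `\"chai\"`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import random\n", "from collections import Counter\n", "\n", "counts = Counter({'n': 78, 'r': 35}) # counts of what follows 'chai' in Shakespeare\n", "draws = [random.choices(list(counts), weights=list(counts.values()), k=1)[0]\n", "         for _ in range(10_000)]\n", "tally = Counter(draws) # 'n' should win roughly 78/113 of the draws\n", "tally"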
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def gen(LM: LanguageModel, length=1000, start=()) -> None:\n", " \"\"\"Call generate_tokens and print the resulting tokens.\"\"\"\n", " print(cat(generate_tokens(LM, length, start)))\n", " \n", "def random_sample(counter: Counter) -> Token:\n", " \"\"\"Randomly sample a token from the counter, proportional to each token's count.\"\"\"\n", " return random.choices(list(counter), weights=list(counter.values()), k=1)[0]\n", "\n", "cat = ''.join # Function to join strings together" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's train a character-level language model of order 4 on the Shakespeare data. We'll call the model `LM4`. (Note that saying `tokens=data` means that the sequence of tokens is equal to the sequence of characters in `data`; in other words each character is a token.)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "data = open(\"shakespeare_input.txt\").read()\n", "\n", "LM = train_LM(tokens=data, order=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are some examples of what's in the model, for various 4-character histories:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({'n': 78, 'r': 35})" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "LM[\"chai\"]" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'n'" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "random_sample(LM[\"chai\"])" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "Counter({'p': 1360,\n", " 's': 2058,\n", " 'l': 1006,\n", " 'o': 530,\n", " 'g': 1037,\n", " 'c': 1561,\n", " 'a': 
554,\n", " 'C': 81,\n", " 'r': 804,\n", " 'h': 1029,\n", " 'R': 45,\n", " 'd': 1170,\n", " 'w': 1759,\n", " 'b': 1217,\n", " 'm': 1392,\n", " 'v': 388,\n", " 't': 1109,\n", " 'f': 1258,\n", " 'i': 298,\n", " 'n': 616,\n", " 'V': 18,\n", " 'e': 704,\n", " 'u': 105,\n", " 'L': 105,\n", " 'y': 120,\n", " 'A': 29,\n", " 'H': 20,\n", " 'k': 713,\n", " 'M': 54,\n", " 'T': 102,\n", " 'j': 99,\n", " 'q': 171,\n", " 'K': 22,\n", " 'D': 146,\n", " 'P': 54,\n", " 'S': 40,\n", " 'G': 75,\n", " 'I': 14,\n", " 'B': 31,\n", " 'W': 14,\n", " 'E': 77,\n", " 'F': 103,\n", " 'O': 3,\n", " \"'\": 10,\n", " 'z': 6,\n", " 'J': 30,\n", " 'N': 18,\n", " 'Q': 7})" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "LM[\"the \"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So `\"chai\"` is followed by either `'n'` or `'r'`, and almost any letter can follow `\"the \"`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generating Shakespeare\n", "\n", "We cann generate a random text from the order 4 character model:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "First, hypocrity.\n", "\n", "Messenge a bear she is, malice for the people's lion!\n", "\n", "TALBOT:\n", "Thou dwell and,\n", "And liest felt malice, by Cleon, then, sir?\n", "\n", "QUEEN:\n", "Yes, we press might\n", "It may purch:\n", "A coward and Valenting kiss,\n", "And perils house,\n", "Till deeds as dost ther;\n", "And I, to keep hour could nevership not yet, I this cause.\n", "\n", "Solicy.\n", "\n", "NYM:\n", "To his not my lord\n", "'her beggars never hear us\n", "to do\n", "Not sell with the now I furthen here that thou all.\n", "\n", "KING RICHARD II:\n", "Go, but boy lords this kins\n", "Answer unkindness.'\n", "Plant reason\n", "Are mayst pleasure\n", "What cousiness come inst the devils, and I say, 
his spurn by serves;\n", "For and upon't her than lord.\n", "\n", "QUEEN:\n", "Then, nothing of he will leaving.\n", "\n", "SIR TOBY BELCH:\n", "Come of an even to portculling end in are a dog; and thither with grief, heart is much vicion! By their sick with\n", "corrupt,\n", "But my lords: but home\n", "Are than and shot as the once inter way I\n", "shall,\n", "This fruit.\n", "\n", "LONGAVILLES:\n", "'Tis yet at hatility the eachelor our plead me, the cram with mattended grace and my yet \n" ] } ], "source": [ "gen(LM)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Order 4 captures the structure of plays, mentions some characters, and generates mostly English words. But the words don't always go together to form grammatical sentences, and there is certainly no coherence or plot. \n", "\n", "## Generating Order 7 Shakespeare\n", "\n", "What if we increase the model to order 7? Or 10? The output gets a bit better, roughly as good as the RNN models that Karpathy shows, and all from a much simpler *n*-gram model." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "First Clown:\n", "What are so envenom'd: therefore, good Ursula\n", "Walk in thy opinion and hit\n", "The cheater: but lechery eats in my person's sacred king.\n", "Ay me! 
sad hours must fight no more.\n", "\n", "OLIVIA:\n", "Are you think you; you well:\n", "A gallant ship, sir; you have not denies\n", "The discourse our supposed\n", "He that makes King Edward from our counsel in my close they will this young daughter is my birth;\n", "But no man have I here resignation of them: while apart.\n", "Stand in this I challenge much as to mince nought;\n", "Watch'd you,\n", "Your displeasures, and all the heart bleed at Plashy too;\n", "My operant power unto Octavius' tent\n", "How the hand\n", "Of him that I were you venture of muttons, be your leave,\n", "For which the anvil of him; you shall acquitted\n", "Without book are adventurously bound together.\n", "\n", "FORD:\n", "Marry, we have a parish-top. What is the matters. This curse thought of revenge,\n", "For governed from the proclaim it civil dissemblies to trust not colour will.\n", "\n", "TITANIA:\n", "My good king, from the very know, sir king of her lips,\n", "\n" ] } ], "source": [ "gen(train_LM(data, order=7))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generating Order 10 Shakespeare" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "First Citizen:\n", "We are blest in peace and honour than\n", "Your gates against an alien\n", "That by direct or by collateral hand\n", "They find us touch'd, or carved to thee.\n", "\n", "CARDINAL WOLSEY:\n", "Your grace hath blessed and engaged to many Greeks,\n", "Even in these cases, where in gore he lay insteep'd,\n", "And take it.\n", "\n", "PROTEUS:\n", "My gracious king:\n", "And I do wish\n", "That your pains the hire;\n", "If you do wrong you? 
alas, our places.\n", "\n", "SATURNINUS:\n", "Why, worthy Margaret, that is the mad mothers than snow,\n", "And all the posterns\n", "Clear them out.\n", "\n", "All:\n", "A heavy reckoning to make\n", "Mine eyes too, examined my parts with death, goodman Dull.\n", "\n", "DULL:\n", "Which is not confess she does: there's another place\n", "And find me well deliver:\n", "Mark Antony, she pursed up\n", "his heart think it cites us, brother, I go; I'll win them, fear it not;\n", "Let not thy nature; let not every man's Hero:\n", "\n", "CLAUDIO:\n", "I know him; 'tis a mere scutcheon: and so\n", "ends my catechism.\n", "\n", "EARL OF DOUGLAS:\n", "'Faith, that hath gull'd thee there.\n", "But, room, fairy! here comes your father's \n" ] } ], "source": [ "gen(train_LM(data, order=10))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Aside: Probabilities and Smoothing\n", "\n", "Sometimes we'd rather see probabilities, not raw counts. Given a language model `LM`, the probability *P*(*t* | *h*) can be computed as follows:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def P(t: Token, h: str, LM: LanguageModel) -> float:\n", " \"\"\"The probability that token t follows history h.\"\"\"\n", " return LM[h][t] / sum(LM[h].values())" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.09286165508528112" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "P('s', 'the ', LM)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.6902654867256637" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "P('n', 'chai', LM)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.30973451327433627" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "P('r', 
'chai', LM)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.0" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "P('s', 'chai', LM)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.0" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "P(' ', 'chai', LM)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Shakespeare never wrote about \"chaise longues\" or \"chai tea,\" so the probability of an `'s'` or `' '` following `'chai'` is zero, according to our language model. But do we really want to say it is absolutely impossible for the sequence of letters `'chais'` or `'chai '` to appear in a generated text, just because we didn't happen to see it in our training data? More sophisticated language models use [**smoothing**](https://en.wikipedia.org/wiki/Kneser%E2%80%93Ney_smoothing) to assign non-zero (but small) probabilities to previously unseen sequences. In this notebook we stick to the basic unsmoothed model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Aside: Starting Text\n", "\n", "One thing you may have noticed: all the generated passages start the same. Why is that? Because the training data happens to start with the line \"First Citizen:\", and so when we call `generate_tokens`, we start with an empty history, and the only thing that follows the empty history in the training data is the letter \"F\", the only thing that follows \"F\" is \"i\", and so on, until we get to a point where there are multiple choices. We could get more variety in the start of the generated text by breaking the training text up into multiple sections, so that each section would contribute a different possible starting point. 
But that would require some knowledge of the structure of the training text; right now the only assumption is that it is a sequence of tokens/characters.\n", "\n", "We can give a starting text to `generate_tokens` and it will continue from there. But since the models only look at a few characters of history (just 4 for `LM`), this won't make much difference. For example, the following won't make the model generate a story about Romeo:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ROMEO:\n", "What swifter of so much a doubled,\n", "And I may, the dare, by the lives done own rich, that you naughter-out appearancess of your turn us,\n", "Whose heave born blow be prology.\n", "Keeper: proud.\n", "\n", "BALTHAZAR:\n", "Let so shall a picture we mercy,\n", "And the king this bondage an e'er that I storator:\n", "\n", "FERDINAL CAMPEIUS:\n", "First Servant:\n", "I this on your end\n", "Louder with methink I do bid it: if heavens:\n", "shall this vest; and run, my troops then, good and so forty wife.\n", "Invited.\n", "\n", "MACBETH:\n", "Avaunt,\n", "And Here is days.\n", "\n", "ORLEANS:\n", "And friend;\n", "Dismissing, she detain the time thou scourselves itself:\n", "but with the gent the half abused!\n", "\n", "QUEEN:\n", "Their daughteous most in my lost fight\n", "The prey to say you give himselves Lord's the comes bold your look the lady, look you, Fabian:\n", "A pass!\n", "\n", "EMILIA:\n", "Is heater is they my him; and chard for my lord, ripe to Pompey, I and let us have your woo?\n", "\n", "Think upon: indeed? 
Do noison.\n", "'Tis most in a suddenly out, sir; you are\n", "A cist\n", "For noble Antent our\n", "daughter disclose him.\n", "\n", "CELIA:\n", "Dou\n" ] } ], "source": [ "gen(LM, start='ROMEO')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Linux Kernel C++ Code\n", "\n", "Goldberg's point is that the simple character-level n-gram model performs about as well as the more complex RNN model on Shakespearean text. \n", "\n", "But Karpathy also trained an RNN on 6 megabytes of Linux-kernel C++ code. Let's see what we can do with that training data." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 241465 759639 6206997 linux_input.txt\n" ] } ], "source": [ "! [ -f linux_input.txt ] || curl -O https://norvig.com/ngrams/linux_input.txt\n", "! wc linux_input.txt" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "linux = open(\"linux_input.txt\").read()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generating Order 10 C++\n", "\n", "We'll start with an order-10 character model, and compare that to an order-20 model. We'll generate a longer text, because sometimes 1000 characters ends up being just one long comment." 
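] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An aside, with a self-contained sketch that is not in the original notebooks: one reason higher orders keep improving is that the number of distinct histories grows with the order, until nearly every history in the training data is unique and the model comes close to quoting long stretches of it verbatim. The toy function below counts the distinct histories a model of a given order would store:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def distinct_histories(text, order: int) -> int:\n", " \"\"\"How many distinct histories a model of this order would store for text.\"\"\"\n", " return len({text[max(0, i - order):i] for i in range(len(text) + 1)})\n", "\n", "sample = 'the cat sat on the mat'\n", "[distinct_histories(sample, n) for n in (2, 8, 16)] # grows with order, then saturates"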
] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/*\n", " * linux/kernel.h>\n", "#include \n", "#include irq_unmask\t= noop,\n", "\t.irq_enable)\n", "\t\tclear_frozen = false;\n", "\n", "\tif (time_before(end_time, timer_t, timer_id, &flag);\n", "\tif (err)\n", "\t\treturn;\n", "\n", "\ttracing_resize_ring_buffer_record_disabled)\n", "\t\treturn;\n", "\tif (desc) {\n", "\t\traw_spin_lock_irqsave(&tracepoint_str + (*pos - 1);\n", "\ti++;\n", "\n", "find_first_elem;\n", "\t}\n", "\n", "\t/* sort them */\n", "\tarray_desc\n", "#define CAP_PI\t\t(void *)rec->ip);\n", "\t\t\t/* Ftrace is shutting down system\");\n", "\tshutdown_task != NULL)\n", "\t\tevent->rcu_pending()) {\n", "\t\terr = -EINVAL;\n", "}\n", "\n", "/* Special cases that /proc allows\n", "\t * for built-in exception is the concept of\n", "a \"sequence count */\n", "\t\titer->trace is a kernel\n", " * and userspace address.\n", "\t\t * Either we have now waited for the first call.\n", " * Return: %false if it was a timeout or signal will be freed in case of UMH_NO_WAIT)\t/* task has the given prefix.\n", " *\t0 if no string for all tasks in the system configuration.\n", " *\n", " * Use this only\n", " * one call to synchronized by task_work_add()\n", " *\n", " * Flush any pending */\n", "\tdiv = div_s64(offset64 << NTP_SCALE_SHIFT. */\n", "extern unsigned long msleep_interruptible(\n", "\t\t\trnp->nocb_gp_tail;\n", "\tbool nocb_leader = rdp_spawn)\n", "\t\t\t\trdp_last = rdp;\n", "\t\t\t\trdp = rdp->nocb_follower_head;\n", "}\n", "\n", "/*\n", " * RCU global state. */\n", "\n", "/*\n", " * Priority Inheritance state:\n", " */\n", "struct task_struct *p, int user_namespace *user_ns, struct task_struct() in task_numa_env env = {\n", "\t\t.p = p,\n", "\n", "\t\t.src_cpu = busiest;\n", "\n", "out_balanced:\n", "\t\t/*\n", "\t\t * If the scheduled. 
*/\n", "\tMULTI_STOP_PREPARE, cpu);\n", "\tBUG_ON(event->parent)\n", "\t\tgoto out;\n", "\t}\n", "\ttrace_access_lock, cpu);\n", "\tatomic_inc(&buffer->mutex);\n", "\tINIT_LIST_HEAD(&cset->mg_tasks);\n", "\t\tfakewriters, 4, \"Number of page frames total] + PAGES_FOR_IO)) / 2\n", "\t\t\t- 2 * DIV_ROUND_UP(NR_CPUS, RCU_FANOUT;\n", "\n", "\t/*\n", "\t * The taskstats *mk_reply(struct sched_rt_entity *rt_se)\n", "{\n", "\tstruct fd output;\n", "\t\t\tret = blk_trace_getrq, NULL);\n", "\tcd.wrap_kt, HRTIMER_MODE_ABS_PINNED, 0);\n", "}\n", "\n", "/**\n", " * ptrace_trap_notify(t);\n", "\t\t\t}\n", "\t\t\tif (later_mask = mask;\n", "\tcurrent->curr_ret_stack, cpu);\n", "\t}\n", "\tpreempt_disable_cmds(void)\n", "{\n", "\treturn wl;\n", "}\n", "#else\n", "static inline void module_unload_free(struct module *mod;\n", "\n", "\tmod = __module_get(dev->owner))\n", "\t\treturn -EINVAL;\n", "\n", "\traw_spin_unlock(&cfs_b->lock\n", " * must be able to reap\n", "\t * it yet.\n", "\t */\n", "\tftrace_hash(new_hash);\n", "\t\t}\n", "\t\tmemory_bm_clear_bit(KTHREAD_IS_PARKED, &self->flags)) {\n", "\t\tif (desc->wake_depth++ == 0) {\n", "\t\t/* We should not be called unless tick_nohz_full_enabled() || rt_b->rt_runtime == period, ie unlimited)\\n\"\n", "#endif\n", "\t\"\\n trace_marker\\t\\t- Write 0/1 to enable function */\n", "\tcur_stack >= env->prog->aux->ops->\n", "\t\t\tconvert_ctx_access(int off, int size,\n", "\t\t\t\tenv->flags &= ~mask;\n", "}\n", "\n", "static const struct cpupri *cp,\n", "\t\t struct held_lock *hlock)\n", "{\n", "\treturn __wakeup_reset(tr);\n", "\n", "\tset_tracer_flags->val;\n", "\ttrace_on = tracing_open,\n", "\t.read\t\t= seq_read,\n", "\t.write\t\t= kgdb_console_write,\n", "\t.seq_show = cpuset_read_u64,\n", "\t\t.write_u64 = cpuset_write_res\n" ] } ], "source": [ "gen(train_LM(linux, order=10), length=3000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Order 20 C++" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { 
"collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/*\n", " * linux/kernel/irq/proc.c\n", " *\n", " * Copyright (C) 2009 Jason Baron \n", " * Copyright (C) 2009 Jason Baron \n", " * Copyright (C) 2006 Rafael J. Wysocki \n", " *\n", " * This file is released under the GPLv2.\n", " */\n", "\n", "#include \n", "#include \n", "#include \n", "\n", "#include \n", "#include \n", "\n", "#include \"trace_probe.h\"\n", "\n", "#define KPROBE_EVENT_SYSTEM);\n", "\tif (WARN_ON_ONCE(ret)) {\n", "\t\tpr_warn(\"error enabling all events\\n\");\n", "\t\treturn;\n", "\t}\n", "\n", "\tcancel_delayed_work_sync(&req->work);\n", "\n", "\ttrace_pm_qos_update_request(req->pm_qos_class,\n", "\t\t\t\t\t new_value, timeout_us);\n", "\tif (new_value != req->node.prio)\n", "\t\tpm_qos_update_target(struct pm_qos_constraints network_tput_constraints,\n", "\t.name = \"memory_bandwidth\",\n", "};\n", "\n", "\n", "static struct pm_qos_object memory_bandwidth_pm_qos,\n", "};\n", "\n", "static ssize_t\n", "tracing_write_stub(struct file *filp, const char __user *ubuf, size_t cnt,\n", "\t\t loff_t *ppos)\n", "{\n", "\tint ret = -ENODEV;\n", "\n", "\tmutex_lock(&trace_types_lock);\n", "\n", "\ttr->current_trace = &nop_trace;\n", "\n", "#ifdef CONFIG_BRANCH_TRACER\n", "int\n", "trace_selftest_startup_wakeup,\n", "#endif\n", "\t.open\t\t= wakeup_trace_open,\n", "\t.close\t\t= graph_trace_close,\n", "\t.init\t\t= graph_trace_init,\n", "\t.reset\t\t= function_trace_start,\n", "\t.flags\t\t= &func_flags,\n", "\t.set_flag\t= wakeup_set_flag,\n", "\t.flag_changed\t= irqsoff_flag_changed,\n", "#ifdef CONFIG_FTRACE_SELFTEST\n", "\t.selftest = trace_selftest_startup_preemptirqsoff(struct tracer *tracer = tr->current_trace;\n", "\tinfo->iter.trace_buffer = &tr->trace_buffer;\n", "\tinfo->spare\t\t= NULL;\n", "\t/* Force reading ring buffer for first read */\n", "\tinfo->read\t\t= (unsigned int)-1;\n", "\n", "\tfilp->private_data = 
dir;\n", "\n", "\treturn 0;\n", "}\n", "\n", "static void mmio_trace_start(struct trace_iterator *iter, int flags,\n", "\t\t struct timespec *tp)\n", "{\n", "\treturn posix_cpu_clock_get,\n", "\t.timer_create\t= posix_cpu_timer_create(&timer);\n", "\ttimer.it_process = current;\n", "\tif (!error) {\n", "\t\tstatic struct itimerspec zero_it;\n", "\n", "\t\tmemset(it, 0, sizeof *it);\n", "\t\tit->it_value = *rqtp;\n", "\n", "\t\tspin_lock_irq(&callback_lock);\n", "\n", "\tif (!cpumask_empty(desc->percpu_enabled));\n", "\t\tgoto bad;\n", "\t}\n", "\n", "\t/* Found it - now remove it from the stack, and add back the other\n", "\t * entries (if any), recalculating the hash along the way:\n", "\t */\n", "\n", "\tcurr->lockdep_depth++;\n", "\tcheck_chain_key(curr);\n", "}\n", "\n", "static int __lock_is_held(struct lockdep_map *lock)\n", "{\n", "\tstruct task_struct *p, int head)\n", "{\n", "\tstruct sched_rt_entity *rt_se)\n", "{\n", "\treturn !list_empty(&pool->worklist) &&\n", "\t\tatomic_read(&pool->nr_running);\n", "\t}\n", "}\n", "\n", "static void update_cond_flag(struct ftrace_event_field *field;\n", "\tstruct list_head *head_page, *prev_page, *r;\n", "\t\tstruct list_head *firing,\n", "\t\t unsigned long len,\n", "\t\t\t unsigned long len,\n", "\t\t\t\t struct load_info *info)\n", "{\n", "}\n", "\n", "static void add_kallsyms(struct module *mod)\n", "{\n", "\tdel_usage_links(mod);\n", "\tadd_sect_attrs(mod, info);\n", "\tadd_notes_attrs(mod, info);\n", "\n", "\tkobject_uevent(&mod->mkobj.kobj, 0, sizeof(mod->mkobj.kobj));\n", "\tmod->mkobj.kobj.kset = module_kset;\n", "\t\terr = kobject_init_and_add(&mk->kobj, &module_uevent.attr);\n", "#endif\n", "\t\tif (err) {\n", "\t\t\tkobject_put(&mk->kobj);\n", "\t\t}\n", "\t}\n", "}\n", "\n", "/* module\n" ] } ], "source": [ "gen(train_LM(linux, order=20), length=3000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analysis of Generated Linux Text\n", "\n", "As Goldberg says, \"Order 10 is pretty much 
junk.\" But order 20 is much better. Most of the comments have a start and an end; most of the open parentheses are balanced with a close parentheses; but the braces are not as well balanced. That shouldn't be surprising. If the span of an open/close parenthesis pair is less than 20 characters then it can be represented within the model, but if the span of an open/close brace is more than 20 characters, then it cannot be represented by the model. Goldberg notes that Karpathy's RRN seems to have learned to devote some of its long short-term memory (LSTM) to representing nesting level, as well as things like whether we are currently within a string or a comment. It is indeed impressive, as Karpathy says, that the model learned to do this on its own, without any input from the human engineer.\n", "\n", "## Token Models versus Character Models\n", "\n", "Karpathy and Goldberg both used character models, because the exact formatting of characters (especially indentation and line breaks) is important in the format of plays and C++ programs. But if you are interested in generating paragraphs of text that don't have any specific format, it is common to use a **word** model, which represents the probability of the next word given the previous words, or a **token** model in which tokens can be words, punctuation, or parts of words. For example, the text `\"Spiderman!\"` might be broken up into the three tokens `\"Spider\"`, `\"man\"`, and `\"!\"`. 
\n", "\n", "One simple way of tokenizing a text is to break it up into alternating strings of word and non-word characters; the function `tokenize` does that by default:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "import re\n", "\n", "word_or_nonword = r'\\w+|\\W+' # Regular expression to parse a string of either word or non-word characters.\n", "\n", "def tokenize(text: str, regex=word_or_nonword) -> List[Token]: \n", " \"\"\"Break text up into tokens using regex.\"\"\"\n", " return re.findall(regex, text)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "assert tokenize('Soft! who comes here?') == [\n", " 'Soft', '! ', 'who', ' ', 'comes', ' ', 'here', '?']\n", "\n", "assert tokenize('wherefore art thou ') == [\n", " 'wherefore', ' ', 'art', ' ', 'thou', ' ']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can train a token model on the Shakespeare data. A model of order 6 keeps a history of up to three word and three non-word tokens. 
" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "TLM = train_LM(tokenize(data), order=6)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({'Romeo': 1})" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "TLM['wherefore art thou ']" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({'stars': 1, 'Grecian': 1})" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "TLM['not in our ']" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({'life': 1, 'business': 1, 'dinner': 1, 'time': 1})" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "TLM['end of my ']" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({' ': 2})" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "TLM[' end of my']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see below that the quality of the token models is similar to character models, and improves from 6 tokens to 8:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "First Citizen:\n", "Before we proceed any further, hear me speak,\n", "Before you answer Warwick. His demand\n", "Springs not from Edward's well-meant honest love,\n", "But from deceit bred by necessity;\n", "For how can I grace my talk,\n", "Wanting a hand to hold a sceptre up\n", "And with the clamour of thy drum,\n", "And even at hand a drum is ready braced\n", "That shall reverberate all as loud as Mars. 
By Jupiter,\n", "Were I the Moor, I would not be awaked.\n", "\n", "LORENZO:\n", "That is the very note of it: and it is known she is, these moral laws\n", "Of nature and of nations, 'long\n", "To him and his virtue;\n", "By her election may be truly read\n", "What kind of man,\n", "So keen and greedy to confound a man:\n", "He plies the duke at dinner: by two o'clock I'll get me such a question: stand again:\n", "Think'st thou I am an old man's life is done:\n", "Then, dear my liege, mine honour let me try;\n", "In that I live and for that will I cause these of Cyprus to\n", "mutiny; whose qualification shall come into no true\n", "taste again but by the recorder.\n", "Then he was urged to tell my tale again,\n", "'Thus saith the duke, thus hath the duke inferr'd;'\n", "But nothing spake in warrant from himself.\n", "When he had done, some followers of mine own lie heavy in my breast\n", "And go well satisfied to France again.\n", "\n", "PRINCESS:\n", "You do the nobler.\n", "\n", "CORIOLANUS:\n", "I muse my Lord of Winchester, I charge you,\n", "Not fearing the displeasure of your master,\n", "Which on your just proceeding I'll keep what I have said to you?\n", "\n", "OPHELIA:\n", "So please you, it is true, indeed.\n", "\n", "GRATIANO:\n", "'Tis a strange serpent.\n", "\n", "MARK ANTONY:\n", "'Tis said, man; and farewell.\n", "\n", "EROS:\n", "Farewell, great chief. Shall I strike at it with you, and so following, but I will murder your ruff for this.\n", "\n", "FALSTAFF:\n", "No more, Pistol; I would not\n", "Believe her lips in opening it. 
Proceed.\n", "\n", "CORNELIUS:\n", "Your daughter, whom she bore in hand to love\n", "With such integrity, she did confess\n", "Was as a scorpion to her sight; whose life,\n", "But that the heavens fought: the king himself\n", "Of his wings destitute, the army broken,\n", "And but the backs of Britons seen, all flying\n", "Through a straight lane; the enemy full-hearted,\n", "Lolling the tongue with slaughtering, having work\n", "More plentiful than tools to do't?\n", "\n", "First Lord:\n", "You must not think\n", "That we are made of stuff so flat and dull\n", "That we can let our beard be shook with danger\n", "And think it not the worst of me. So, I leave you, sir.\n", "\n", "VINCENTIO:\n", "You shall not now be stol'n away to Rome; hath ta'en thy stand,\n", "The elected deer before thee?\n", "\n", "PISANIO:\n", "But to win time\n", "To lose so bad employment; in the which\n", "I do arrest you, sir: you might have heard it said, unbidden guests\n", "Are often welcomest when they are full,\n", "They \n" ] } ], "source": [ "gen(TLM)" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "First Citizen:\n", "Before we proceed any further, hear me speak.\n", "\n", "All:\n", "Peace, ho! Hear Antony. Most noble Antony!\n", "\n", "ANTONY:\n", "Why, friends, you go to do you know not what\n", "you speak. But, if ever the duke return, as our\n", "prayers are he may, let me desire you to make your\n", "answer before him. 
If it be honest you have spoke,\n", "you have courage to maintain it: I am bound to wonder, I am bound\n", "To Persia, and want guilders for my voyage:\n", "Therefore make present satisfaction,\n", "Or I'll attach you by this officer.\n", "\n", "ANTIPHOLUS OF EPHESUS:\n", "I will debate this matter at more leisure\n", "And teach your ears to list me with more heed.\n", "To Adriana, villain, hie thee straight:\n", "Give her this key, and tell her, in the desk\n", "That's cover'd o'er with Turkish tapestry,\n", "There is a purse of ducats; let her send it:\n", "Tell her I am arrested in the street\n", "And that shall bail me; hie thee, slave, be gone!\n", "On, officer, to prison till it come.\n", "\n", "DROMIO OF SYRACUSE:\n", "To Adriana! that is where we dined,\n", "Where Dowsabel did claim me for her husband:\n", "She is too big, I hope, for me to compass.\n", "Thither I must, although against my will,\n", "For servants must their masters' minds fulfil.\n", "\n", "ADRIANA:\n", "Ah, Luciana, did he tempt thee so?\n", "Mightst thou perceive austerely in his eye\n", "That he did plead in earnest? yea or no?\n", "Look'd he or red or pale, or sad or merrily,\n", "Interpretation will misquote our looks,\n", "And we shall feed like oxen at a stall,\n", "The better cherish'd, still the nearer death.\n", "My nephew's trespass may be well forgot;\n", "it hath the excuse of youth and heat of blood,\n", "And an adopted name of privilege,\n", "A hair-brain'd Hotspur, govern'd by a child!\n", "\n", "Second Citizen:\n", "In him there is a hope of government,\n", "That in his nonage council under him,\n", "And in his full and ripen'd years himself,\n", "No doubt, shall then and till then govern well.\n", "\n", "First Citizen:\n", "So stood the state when Henry the Sixth\n", "Was crown'd in Paris but at nine months old.\n", "\n", "Third Citizen:\n", "Stood the state so? 
No, no, good friends, God wot;\n", "For then this land was famously enrich'd\n", "With politic grave counsel; then the king\n", "Had virtuous uncles to protect his grace.\n", "\n", "First Citizen:\n", "Why, so hath this, both by the father and mother.\n", "\n", "Third Citizen:\n", "Better it were they all came by the father,\n", "Or by the father there were none at all;\n", "For emulation now, who shall be nearest,\n", "Will touch us all too near, if God prevent not.\n", "O, full of danger is the Duke of Norfolk:\n", "Ratcliff, thyself, or Catesby; where is he?\n", "\n", "CATESBY:\n", "Here, my lord.\n", "\n", "KING RICHARD III:\n", "Then, by myself--\n", "\n", "QUEEN ELIZABETH:\n", "Thyself thyself misusest.\n", "\n", "KING RICHARD III:\n", "Why, Buckingham, I say, I would be king,\n", "\n", "BUCKINGHAM:\n", "Why, so you are, my thrice renowned liege.\n", "\n", "KING RICHARD III:\n", "\n" ] } ], "source": [ "gen(train_LM(tokenize(data), 8))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## C++ Token Model\n", "\n", "Similar remarks hold for token models trained on C++ data:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/*\n", " * linux/kernel/irq/autoprobe.c\n", " *\n", " * Copyright (C) 1992, 1998-2006 Linus Torvalds, Ingo Molnar\n", " *\n", " * This file contains the IRQ-resend code\n", " *\n", " * If the interrupt is waiting to be processed, we try to re-run it.\n", " * We can't directly run it from here since the caller might be in an\n", " * interrupt-protected region. 
Not all irq controller chips can\n", " * retrigger interrupts at the hardware level, so in those cases\n", " * we allow the resending of IRQs via a tasklet.\n", " */\n", "\n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "#include \n", "\n", "#include \"trace.h\"\n", "\n", "static DEFINE_PER_CPU(int, bpf_prog_active);\n", "\n", "/**\n", " * trace_call_bpf - invoke BPF program\n", " * @prog: BPF program\n", " * @ctx: opaque context pointer\n", " *\n", " * kprobe handlers execute BPF programs via this helper.\n", " * Can be used from static tracepoints in the future.\n", " *\n", " * Return: BPF programs always return an integer which is interpreted by\n", " * kprobe handler as:\n", " * 0 - return from kprobe (event is filtered out)\n", " * 1 - store kprobe event into ring-buffer,\n", "\t\t * so return zero here\n", "\t\t */\n", "\t\tret = 0;\n", "\t\tgoto out;\n", "\t}\n", "\n", "\tif (trigger) {\n", "\t\tnumber = strsep(&trigger, \":\");\n", "\n", "\t\tret = -EINVAL;\n", "\t\tif (!strlen(number))\n", "\t\t\tgoto out_free;\n", "\n", "\t\t/*\n", "\t\t * We use the callback data field (which is a pointer)\n", "\t\t * as our counter.\n", "\t\t */\n", "\t\tret = kstrtoul(number, 0, &data->count);\n", "\tif (ret)\n", "\t\tgoto out_free;\n", "\n", " out_reg:\n", "\t/* Don't let event modules unload while probe registered */\n", "\tret = try_module_get(file->event_call->mod);\n", "\tif (!ret) {\n", "\t\tret = -EBUSY;\n", "\t\tgoto out_free;\n", "\t}\n", "\n", "\tret = __ftrace_event_enable_disable(file, 1, 1);\n", "\tif (ret < 0)\n", "\t\tgoto skip_full_check;\n", "\n", "\tenv->explored_states = kcalloc(env->prog->len,\n", "\t\t\t\t sizeof(struct verifier_state_list *),\n", "\t\t\t\t GFP_USER);\n", "\tret = -ENOMEM;\n", "\tif (!env->explored_states)\n", "\t\tgoto skip_full_check;\n", "\n", "\tret = check_cfg(env);\n", "\tif (ret < 0)\n", "\t\tgoto out_put;\n", "\tret = cmd_ops->reg(glob, trigger_ops, 
trigger_data, file);\n", "\t/*\n", "\t * The above returns on success the # of functions enabled,\n", "\t * but if it didn't find any functions it returns zero.\n", "\t * Consider no functions a failure too.\n", "\t */\n", "\tif (!ret) {\n", "\t\tret = -ENOENT;\n", "\t\tgoto out_disable;\n", "\t} else if (ret < 0)\n", "\t\tgoto out_free;\n", "\n", " out_reg:\n", "\t/* Don't let event modules unload while probe registered */\n", "\tret = try_module_get(file->event_call->mod);\n", "\tif (!ret) {\n", "\t\tret = -EBUSY;\n", "\t\tgoto out_free;\n", "\t}\n", "\n", "\tret = trace_event_enable_disable(event_enable_file, 1, 1);\n", "\tif (ret < 0)\n", "\t\tgoto out_disable;\n", "\t/* Just return zero, not the number of enabled functions */\n", "\tret = 0;\n", " out:\n", "\tmutex_unlock(&ftrace_lock);\n", "\n", "\treturn ret;\n", "}\n", "\n", "#ifdef CONFIG_MODULES\n", "\n", "#define next_to_ftrace_page(p) container_of(p, struct ftrace_page, next)\n", "\n", "void ftrace_release_mod(struct module *mod)\n", "{\n", "\tstruct module_attribute *attr;\n", "\tint i;\n", "\n", "\tfor (i = KDB_MAXBPT - 1; i >= 0; i--)\n", "\t\tif (rdp->nxttail[i] == rdp->nxttail[RCU_DONE_TAIL])\n", "\t\t\t\trdp->nxttail[i] = rsp->orphan_donetail;\n", "\t\trsp->orphan_donelist = NULL;\n", "\t\trsp->orphan_donetail = &rsp->orphan_donelist;\n", "\t}\n", "\tif (rsp->orphan_nxtlist != NULL) {\n", "\t\t__call_rcu_nocb_enqueue(rdp, rsp->orphan_nxtlist,\n", "\t\t\t\t\trsp->orphan_nxttail, ql, qll, flags);\n", "\t\tql = qll = 0;\n", "\t\trsp->orphan_nxtlist = NULL;\n", "\t\trsp->orphan_nxttail = &rsp->orphan_nxtlist;\n", "\t}\n", "\treturn true;\n", "}\n", "\n", "/*\n", " * If necessary, kick off a new grace period if one is needed.\n", "\t */\n", "\trcu_report_qs_rsp(rsp, flags); /* releases rnp->lock. */\n", "}\n", "\n", "/*\n", " * Record a quiescent state for all tasks that were preempted within an RCU read-side critical\n", " * sections. 
This function also enables RCU lockdep checking.\n", " */\n", "void rcu_scheduler_starting(void)\n", "{\n", "\tWARN_ON(num_online_cpus() != 1);\n", "\tWARN_ON(nr_context_switches() > 0);\n", "\trcu_scheduler_active = 1;\n", "}\n", "\n", "/*\n", " * Compute the per-level fanout, either using the exact fanout specified\n", " * or balancing the tree, depending on CONFIG_RCU_FANOUT_EXACT.\n", " */\n", "static void __init rcu_init_levelspread(struct rcu_state *rsp)\n", "{\n", "\tint cpu;\n", "\tunsigned long sum = 0;\n", "\tunsigned long t;\n", "\n", "\tfor_each_possible_cpu(cpu) {\n", "\t\tt = ACCESS_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->c[1]);\n", "\t}\n", "\treturn sum;\n", "}\n", "\n", "/**\n", " * cleanup_srcu_struct - deconstruct a sleep-RCU structure\n", " * @sp: structure to clean up.\n", " *\n", " * Must invoke this after you are finished using a given srcu_struct that\n", " * was initialized via init_srcu_struct(), else you leak memory.\n", " */\n", "void cleanup_srcu_struct(struct srcu_struct *sp)\n", "{\n", "\tif (WARN_ON(srcu_readers_active(sp)))\n", "\t\treturn; /* Leakage unless caller handles error. */\n", "\tfree_percpu(sp->per_cpu_ref);\n", "\tsp->per_cpu_ref = NULL;\n", "}\n", "EXPORT_SYMBOL_GPL(cleanup_srcu_struct);\n", "\n", "/*\n", " * Counts the new reader in the appropriate per-CPU\n", " * element of the srcu_struct. 
Note that this may well be a different\n", " * CPU than that which was incremented by the corresponding srcu_read_lock().\n", " * Must be called from process context.\n", " *\n", " * Note that it is permissible to omit this call entirely, as is\n", " * done in architectures that do no CPU-hotplug error checking.\n", " */\n", "int cpu_check_up_prepare(int cpu)\n", "{\n", "\tif (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) {\n", "\t\tatomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);\n", "\t\treturn 0;\n", "\t}\n", "\n", "\tswitch (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) == CPU_DEAD)\n", "\t\tgoto update_state;\n", "\tudelay(5);\n", "\n", "\t/* But if the outgoing CPU dawdles, wait increasingly long times. */\n", "\twhile (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) != CPU_DEAD) {\n", "\t\tschedule_timeout_uninterruptible(sleep_jf);\n", "\t\tjf_left -= sleep_jf;\n", "\t\tif (jf_left <= 0)\n", "\t\t\tbreak;\n", "\t\tsleep_jf = DIV_ROUND_UP(sleep_jf * 11, 10);\n", "\t}\n", "update_state:\n", "\toldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));\n", "\t\tif (oldstate != CPU_BROKEN)\n", "\t\t\tnewstate = CPU_DEAD;\n", "\t\telse\n", "\t\t\tnewstate = CPU_DEAD_FROZEN;\n", "\t} while (atomic_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),\n", "\t\t\t\t oldstate, CPU_BROKEN) != oldstate)\n", "\t\t\tgoto update_state;\n", "\t\tret = false;\n", "\t}\n", "\treturn ret;\n", "}\n", "\n", "/*\n", " * Called by the outgoing CPU to report its successful death. Return\n", " * false if this report follows the surviving CPU's timing out.\n", " *\n", " * A separate \"CPU_DEAD_FROZEN\" is used when the surviving CPU\n", " * timed out. 
This approach allows architectures to omit calls to\n", " * cpu_check_up_prepare() and cpu_set_state_online() without defeating\n", " * the next cpu_wait_death()'s polling loop.\n", " */\n", "bool cpu_report_death(void)\n", "{\n", "\tint oldstate;\n", "\tint newstate;\n", "\tint cpu = smp_processor_id();\n", "\n", "\tdo {\n", "\t\toldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));\n", "\tif (oldstate == CPU_DEAD) {\n", "\t\t/* Outgoing CPU died normally, update state. */\n", "\t\tsmp_mb(); /* atomic_read() before update. */\n", "\t\tatomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_POST_DEAD);\n", "\t} else {\n", "\t\t/* Outgoing CPU still hasn't gotten around\n", " * to dying. In the latter two cases, the CPU might not be set up\n", " * properly, but it is up to the arch-specific code:\n", "\t *\n", "\t * \tinsn -\tcopy_insn() saves the original instruction here for\n", "\t *\t\tarch_uprobe_analyze_insn().\n", "\t *\n", "\t *\tixol -\tpotentially modified instruction to execute out of\n", "\t *\t\tline, copied to xol_area by xol_get_insn_slot().\n", "\t */\n", "\tstruct arch_uprobe\tarch;\n", "};\n", "\n", "struct return_instance {\n", "\tstruct uprobe\t\t*uprobe;\n", "\tunsigned long\t\tfunc;\n", "\tunsigned long\t\tret_ip;\n", "};\n", "\n", "/*\n", " * trace_flag_type is an enumeration that defines bit\n", " * positions into trace_flags that controls the output.\n", " *\n", " * NOTE: These bits must match the trace_options array in\n", " * trace.c.\n", " */\n", "enum trace_iterator_flags {\n", "\tTRACE_ITER_PRINT_PARENT\t\t= 0x01,\n", "\tTRACE_ITER_SYM_OFFSET\t\t= 0x02,\n", "\tTRACE_ITER_SYM_ADDR\t\t= 0x04,\n", "\tTRACE_ITER_VERBOSE\t\t= 0x08,\n", "\tTRACE_ITER_RAW\t\t\t= 0x10,\n", "\tTRACE_ITER_HEX\t\t\t= 0x20,\n", "\tTRACE_ITER_BIN\t\t\t= 0x40,\n", "\tTRACE_ITER_BLOCK\t\t= 0x80,\n", "\tTRACE_ITER_STACKTRACE\t\t= 0x100,\n", "\tTRACE_ITER_PRINTK\t\t= 0x200,\n", "\tTRACE_ITER_PREEMPTONLY\t\t= 0x400,\n", "\tTRACE_ITER_BRANCH\t\t= 0x800,\n", 
"\tTRACE_ITER_ANNOTATE\t\t= 0x1000,\n", "\tTRACE_ITER_USERSTACKTRACE = 0x2000,\n", "\tTRACE_ITER_SYM_USEROBJ = 0x4000,\n", "\tTRACE_ITER_PRINTK_MSGONLY\t= 0x8000,\n", "\tTRACE_ITER_CONTEXT_INFO\t\t= 0x10000, /* Print pid/cpu/time */\n", "\tTRACE_ITER_LATENCY_FMT\t\t= 0x20000,\n", "\tTRACE_ITER_SLEEP_TIME\t\t= 0x40000,\n", "\tTRACE_ITER_GRAPH_TIME\t\t= 0x80000,\n", "\tTRACE_ITER_RECORD_CMD\t\t= 0x100000,\n", "\tTRACE_ITER_OVERWRITE\t\t= 0x200000,\n", "\tTRACE_ITER_STOP_ON_FREE\t\t= 0x400000,\n", "\tTRACE_ITER_IRQ_INFO\t\t= 0x800000,\n", "\tTRACE_ITER_MARKERS\t\t= 0x1000000,\n", "\tTRACE_ITER_FUNCTION\t\t= 0x2000000,\n", "};\n", "\n", "/*\n", " * TRACE_ITER_SYM_MASK masks the options in trace_flags that\n", " * control the output of kernel symbols.\n", " */\n", "#define TRACE_ITER_SYM_MASK \\\n", "\t(TRACE_ITER_PRINT_PARENT|TRACE_ITER_SYM_OFFSET|TRACE_ITER_SYM_ADDR)\n", "\n", "extern struct tracer nop_trace;\n", "\n", "#ifdef CONFIG_BRANCH_TRACER\n", "extern int enable_branch_tracing(struct trace_array *tr)\n", "{\n", "\tmutex_lock(&event_mutex);\n", "\n", "\t/* Disable any event triggers and associated soft-disabled events */\n", "\tclear_event_triggers(tr);\n", "\n", "\t/* Disable any running events */\n", "\t__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);\n", "\tif (WARN_ON_ONCE(ret)) {\n", "\t\tpr_warn(\"error on probing function return.\\n\");\n", "\t\twarn++;\n", "\t} else {\n", "\t\t/* Enable trace point */\n", "\t\ttk = find_trace_kprobe(\"testprobe\", KPROBE_EVENT_SYSTEM);\n", "\t\tif (WARN_ON_ONCE(tk == NULL)) {\n", "\t\t\tpr_warn(\"error on getting probe file.\\n\");\n", "\t\t\t\twarn++;\n", "\t\t\t} else\n", "\t\t\t\tenable_trace_kprobe(tk, file);\n", "\t\t}\n", "\t}\n", "\n", "\tif (warn)\n", "\t\tgoto end;\n", "\n", "\tret = target(1, 2, 3, 4, 5, 6);\n", "\n", "\t/* Disable trace points before removing it */\n", "\ttk = find_trace_kprobe(\"testprobe\", KPROBE_EVENT_SYSTEM);\n", "\tif (WARN_ON_ONCE(tk == NULL)) {\n", "\t\tpr_warn(\"error on 
getting 2nd new probe.\\n\");\n", "\t\t\twarn++;\n", "\t\t} else {\n", "\t\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\t\tpr_warn(\"error on getting test probe.\\n\");\n", "\t\twarn++;\n", "\t} else {\n", "\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\tpr_warn(\"error on getting new probe.\\n\");\n", "\t\t\twarn++;\n", "\t\t} else {\n", "\t\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\t\tpr_warn(\"error on getting probe file.\\n\");\n", "\t\t\t\twarn++;\n", "\t\t\t} else\n", "\t\t\t\tenable_trace_kprobe(tk, file);\n", "\t\t}\n", "\t}\n", "\n", "\tif (warn)\n", "\t\tgoto end;\n", "\n", "\tret = target(1, 2, 3, 4, 5, 6);\n", "\n", "\t/* Disable trace points before removing it */\n", "\ttk = find_trace_kprobe(\"testprobe\", KPROBE_EVENT_SYSTEM);\n", "\tif (WARN_ON_ONCE(tk == NULL)) {\n", "\t\tpr_warn(\"error on getting 2nd test probe.\\n\");\n", "\t\twarn++;\n", "\t} else {\n", "\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\tpr_warn(\"error on getting probe file.\\n\");\n", "\t\t\t\twarn++;\n", "\t\t\t} else\n", "\t\t\t\tenable_trace_kprobe(tk, file);\n", "\t\t}\n", "\t}\n", "\n", "\tret = traceprobe_command(\"r:testprobe2 kprobe_trace_selftest_target \"\n", "\t\t\t\t \"$retval\", create_trace_kprobe);\n", "\tif (WARN_ON_ONCE(ret)) {\n", "\t\tpr_warn(\"error on probing function entry.\\n\");\n", "\t\twarn++;\n", "\t} else {\n", "\t\t/* Enable trace point */\n", "\t\ttk = find_trace_kprobe(\"testprobe\", KPROBE_EVENT_SYSTEM);\n", "\t\tif (WARN_ON_ONCE(tk == NULL)) {\n", "\t\t\tpr_warn(\"error on getting 2nd new probe.\\n\");\n", "\t\t\twarn++;\n", "\t\t} else {\n", "\t\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\t\tpr_warn(\"error on getting probe 
file.\\n\");\n", "\t\t\t\twarn++;\n", "\t\t\t} else\n", "\t\t\t\tenable_trace_kprobe(tk, file);\n", "\t\t}\n", "\t}\n", "\n", "\tif (warn)\n", "\t\tgoto end;\n", "\n", "\tret = target(1, 2, 3, 4, 5, 6);\n", "\n", "\t/* Disable trace points before removing it */\n", "\ttk = find_trace_kprobe(\"testprobe\", KPROBE_EVENT_SYSTEM);\n", "\tif (WARN_ON_ONCE(tk == NULL)) {\n", "\t\tpr_warn(\"error on getting probe file.\\n\");\n", "\t\t\twarn++;\n", "\t\t} else\n", "\t\t\tdisable_trace_kprobe(tk, file);\n", "\t}\n", "\n", "\ttk = find_trace_kprobe(\"testprobe2\", KPROBE_EVENT_SYSTEM);\n", "\tif (WARN_ON_ONCE(tk == NULL)) {\n", "\t\tpr_warn(\"error on getting probe file.\\n\");\n", "\t\t\t\twarn++;\n", "\t\t\t} else\n", "\t\t\t\tenable_trace_kprobe(tk, file);\n", "\t\t}\n", "\t}\n", "\n", "\tif (warn)\n", "\t\tgoto end;\n", "\n", "\tret = target(1, 2, 3, 4, 5, 6);\n", "\n", "\t/* Disable trace points before removing it */\n", "\ttk = find_trace_kprobe(\"testprobe\", KPROBE_EVENT_SYSTEM);\n", "\tif (WARN_ON_ONCE(tk == NULL)) {\n", "\t\tpr_warn(\"error on getting 2nd new probe.\\n\");\n", "\t\t\twarn++;\n", "\t\t} else {\n", "\t\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\t\tpr_warn(\"error on getting 2nd test probe.\\n\");\n", "\t\twarn++;\n", "\t} else {\n", "\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\tpr_warn(\"error on getting 2nd new probe.\\n\");\n", "\t\t\twarn++;\n", "\t\t} else {\n", "\t\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\t\tpr_warn(\"error on getting 2nd test probe.\\n\");\n", "\t\twarn++;\n", "\t} else {\n", "\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\tpr_warn(\"error on getting 2nd new probe.\\n\");\n", "\t\t\twarn++;\n", "\t\t} else {\n", "\t\t\tfile = find_trace_probe_file(tk, 
top_trace_array());\n", "\t\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\t\tpr_warn(\"error on getting probe file.\\n\");\n", "\t\t\t\twarn++;\n", "\t\t\t} else\n", "\t\t\t\tenable_trace_kprobe(tk, file);\n", "\t\t}\n", "\t}\n", "\n", "\tret = traceprobe_command(\"r:testprobe2 kprobe_trace_selftest_target \"\n", "\t\t\t\t \"$retval\", create_trace_kprobe);\n", "\tif (WARN_ON_ONCE(ret)) {\n", "\t\tpr_warn(\"error on probing function entry.\\n\");\n", "\t\twarn++;\n", "\t} else {\n", "\t\t/* Enable trace point */\n", "\t\ttk = find_trace_kprobe(\"testprobe2\", KPROBE_EVENT_SYSTEM);\n", "\t\tif (WARN_ON_ONCE(tk == NULL)) {\n", "\t\t\tpr_warn(\"error on getting 2nd new probe.\\n\");\n", "\t\t\twarn++;\n", "\t\t} else {\n", "\t\t\tfile = find_trace_probe_file(tk, top_trace_array());\n", "\t\t\tif (WARN_ON_ONCE(file == NULL)) {\n", "\t\t\t\tpr_warn\n" ] } ], "source": [ "gen(train_LM(tokenize(linux), 8), length=3000)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.15" } }, "nbformat": 4, "nbformat_minor": 4 }