update seq2seq training

This commit is contained in:
ritchie46 2019-02-20 11:40:27 +01:00
parent 34a55eeadb
commit 47b5045ca9
5 changed files with 258 additions and 216 deletions


@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"metadata": {},
"outputs": [
{
@ -20,15 +20,29 @@
"from torch import nn\n",
"import re\n",
"import os\n",
"from tensorboardX import SummaryWriter\n",
"\n",
"try:\n",
" from tensorboardX import SummaryWriter\n",
"except ModuleNotFoundError:\n",
" pass\n",
"\n",
"device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"print(device)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Sequence to sequence learning\n",
"We will use pytorch to translate short sentences from French to English and vice versa\n",
"\n",
"![](img/hello-lead.png)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@ -39,7 +53,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"outputs": [
{
@ -66,14 +80,45 @@
" print(f.read(200))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Preparing the data I\n",
"\n",
"* Create a Language class that maps indexes to words and words to indexes\n",
"\n",
"**indexes to word**\n",
"```python\n",
"{0: SOS,\n",
" 1: EOS,\n",
" 2: The\n",
" ...\n",
" n: World\n",
"}\n",
"```\n",
"\n",
"**words to indexes**\n",
"```python\n",
"{SOS: 0,\n",
" EOS: 1,\n",
" The: 2\n",
" ...\n",
" World: n\n",
"}\n",
"```\n",
"\n",
"* implement functions to convert the letters to asscii and remove rare letters. (á, ò, ê -> a, o, e)"
]
},
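{
"cell_type": "markdown",
"metadata": {},
"source": [
"The normalization helpers themselves fall outside this hunk. A minimal sketch of what the last bullet describes, assuming helper names `unicode_to_ascii` and `normalize_string` (not necessarily what the notebook uses):\n",
"\n",
"```python\n",
"import re\n",
"import unicodedata\n",
"\n",
"def unicode_to_ascii(s):\n",
"    # Decompose accented characters and drop the combining marks (á -> a)\n",
"    return ''.join(c for c in unicodedata.normalize('NFD', s)\n",
"                   if unicodedata.category(c) != 'Mn')\n",
"\n",
"def normalize_string(s):\n",
"    # Lowercase, strip accents, keep only letters and basic punctuation\n",
"    s = unicode_to_ascii(s.lower().strip())\n",
"    s = re.sub(r'[^a-z.!?]+', ' ', s)\n",
"    return s.strip()\n",
"```"
]
},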
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"\n",
"class Lang:\n",
"class Language:\n",
" \"\"\"\n",
" Utility class that serves as a language dictionary\n",
" \"\"\"\n",
@ -142,27 +187,31 @@
" # Reverse pairs, make Lang instances\n",
" if reverse:\n",
" pairs = [list(reversed(p)) for p in pairs]\n",
" input_lang = Lang(lang2)\n",
" output_lang = Lang(lang1)\n",
" input_lang = Language(lang2)\n",
" output_lang = Language(lang1)\n",
" else:\n",
" input_lang = Lang(lang1)\n",
" output_lang = Lang(lang2)\n",
" input_lang = Language(lang1)\n",
" output_lang = Language(lang2)\n",
"\n",
" return input_lang, output_lang, pairs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Preparing the data II\n",
"Since there are a lot of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. \n",
"Here the maximum length is 10 words (that includes ending punctuation) and we're filtering to sentences that translate to the form \"I am\" or \"He is\" etc. \n",
"(accounting for apostrophes replaced earlier).\n"
]
},
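{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most of the filter condition is cut off by the hunk below. A sketch of the length-plus-prefix filter described above; the `eng_prefixes` tuple and the helper names are illustrative, not the notebook's actual code:\n",
"\n",
"```python\n",
"MAX_LENGTH = 10\n",
"\n",
"# Keep a pair only when the English side starts with one of these forms\n",
"# (apostrophes were stripped earlier, so 'i'm' became 'i m')\n",
"eng_prefixes = ('i am ', 'i m ', 'he is ', 'he s ', 'she is ', 'she s ',\n",
"                'you are ', 'you re ', 'we are ', 'we re ',\n",
"                'they are ', 'they re ')\n",
"\n",
"def keep_pair(p):\n",
"    return (len(p[0].split(' ')) < MAX_LENGTH and\n",
"            len(p[1].split(' ')) < MAX_LENGTH and\n",
"            p[0].startswith(eng_prefixes))\n",
"\n",
"def filter_pairs_sketch(pairs):\n",
"    return [p for p in pairs if keep_pair(p)]\n",
"```"
]
},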
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# Since there are a lot of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. \n",
"# Here the maximum length is 10 words (that includes ending punctuation) and we're filtering to sentences that translate to the form \"I am\" or \"He is\" etc. \n",
"# (accounting for apostrophes replaced earlier).\n",
"\n",
"\n",
"\n",
"def filter_pairs(pairs):\n",
" MAX_LENGTH = 10\n",
" \n",
@ -183,37 +232,17 @@
]
},
{
"cell_type": "code",
"execution_count": 6,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"class Data:\n",
" def __init__(self, pairs, lang_1, lang_2):\n",
" self.pairs = np.array(pairs)\n",
" np.random.seed(9)\n",
" np.random.shuffle(self.pairs)\n",
" idx_1 = [[lang_1.word2index[word] for word in s.split(' ')] \n",
" for s in self.pairs[:, 0]]\n",
" idx_2 = [[lang_2.word2index[word] for word in s.split(' ')]\n",
" for s in self.pairs[:, 1]]\n",
" self.idx_pairs = np.array(list(zip(idx_1, idx_2)))\n",
" self.shuffle_idx = np.arange(len(pairs))\n",
" \n",
" def __str__(self):\n",
" return(self.pairs)\n",
" \n",
" def shuffle(self):\n",
" np.random.shuffle(self.shuffle_idx)\n",
" self.pairs = self.pairs[self.shuffle_idx]\n",
" self.idx_pairs = self.idx_pairs[self.shuffle_idx] \n",
" \n",
" "
"# Preparing the data III\n",
"\n",
"Read the data from the text files, normalize the sentences, create the Language instances from the Language class and wrap the two languages in a Data class so we can shuffle the sentences and query them later."
]
},
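{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, turning one of these sentences into the index tensor the encoder and decoder consume is a small helper; `sentence_to_tensor` is an assumed name, not something defined in the notebook:\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"def sentence_to_tensor(lang, sentence, device='cpu'):\n",
"    # Look up each word in the Language dictionary built above\n",
"    idx = [lang.word2index[word] for word in sentence.split(' ')]\n",
"    return torch.tensor(idx, dtype=torch.long, device=device)\n",
"\n",
"# e.g. sentence_to_tensor(eng, 'we are even EOS')\n",
"```"
]
},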
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"outputs": [
{
@ -235,14 +264,34 @@
"array(['we are even EOS', 'nous sommes a egalite EOS'], dtype='<U60')"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class Data:\n",
" def __init__(self, pairs, lang_1, lang_2):\n",
" self.pairs = np.array(pairs)\n",
" np.random.seed(9)\n",
" np.random.shuffle(self.pairs)\n",
" idx_1 = [[lang_1.word2index[word] for word in s.split(' ')] \n",
" for s in self.pairs[:, 0]]\n",
" idx_2 = [[lang_2.word2index[word] for word in s.split(' ')]\n",
" for s in self.pairs[:, 1]]\n",
" self.idx_pairs = np.array(list(zip(idx_1, idx_2)))\n",
" self.shuffle_idx = np.arange(len(pairs))\n",
" \n",
" def __str__(self):\n",
" return(self.pairs)\n",
" \n",
" def shuffle(self):\n",
" np.random.shuffle(self.shuffle_idx)\n",
" self.pairs = self.pairs[self.shuffle_idx]\n",
" self.idx_pairs = self.idx_pairs[self.shuffle_idx] \n",
" \n",
"def prepare_data(lang1, lang2, reverse=False):\n",
" # read_langs initialized the Lang objects (still empty) and returns the pair sentences.\n",
" # read_langs initialized the Language objects (still empty) and returns the pair sentences.\n",
" input_lang, output_lang, pairs = read_langs(lang1, lang2, reverse)\n",
" print(\"Read %s sentence pairs\" % len(pairs))\n",
" \n",
@ -269,7 +318,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The Encoder\n",
"## Sequence to sequence model\n",
"\n",
"![](img/seq2seq.png)\n",
"\n",
"## The Encoder\n",
"\n",
"The encoder of a seq2seq network is a RNN that outputs some value for every word from the input sentence. For every input word the encoder outputs a vector and a hidden state, and uses the hidden state for the next input word.\n",
"\n",
@ -282,7 +335,7 @@
},
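{
"cell_type": "markdown",
"metadata": {},
"source": [
"Only the tail of the Encoder class is visible in the hunk below. A minimal sketch of the idea described above (an embedding followed by a GRU), with a constructor mirroring the call `Encoder(eng.n_words, 10, 2, False, device)` further down; this is an assumption, not the notebook's exact implementation:\n",
"\n",
"```python\n",
"from torch import nn\n",
"\n",
"class EncoderSketch(nn.Module):\n",
"    def __init__(self, input_size, embedding_size, hidden_size,\n",
"                 bidirectional=False, device='cpu'):\n",
"        super().__init__()\n",
"        self.device = device\n",
"        self.embedding = nn.Embedding(input_size, embedding_size)\n",
"        self.rnn = nn.GRU(embedding_size, hidden_size,\n",
"                          bidirectional=bidirectional)\n",
"\n",
"    def forward(self, x):\n",
"        # x: 1-D tensor of word indices -> (seq_len, batch=1, embedding_size)\n",
"        x = self.embedding(x).unsqueeze(1)\n",
"        # The GRU returns one output per word plus the final hidden state\n",
"        x, h = self.rnn(x)\n",
"        return x, h\n",
"```"
]
},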
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 12,
"metadata": {},
"outputs": [
{
@ -291,7 +344,7 @@
"torch.Size([5, 1, 2])"
]
},
"execution_count": 8,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@ -324,9 +377,9 @@
" return x, h\n",
" \n",
"\n",
"m = Encoder(eng.n_words, 10, 2, False, 'cpu')\n",
"scentence = torch.tensor([400, 1, 2, 6, 8])\n",
"a = m(scentence)\n",
"m = Encoder(eng.n_words, 10, 2, False, device)\n",
"sentence = torch.tensor([400, 1, 2, 6, 8], device=device)\n",
"a = m(sentence)\n",
"a[0].shape"
]
},
@ -340,22 +393,22 @@
"\n",
"At every step of decoding, the decoder is given an input token and hidden state. The initial input token is the start-of-string <SOS> token, and the first hidden state is the context vector (the encoders last hidden state).\n",
" \n",
"![](img/decoder-network.png)\n",
"![](img/decoder-network-adapted.png)\n",
" "
]
},
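{
"cell_type": "markdown",
"metadata": {},
"source": [
"At inference time that loop looks roughly like the sketch below. It assumes SOS and EOS are indices 0 and 1 (as in the index table earlier) and that the decoder returns its output first; the exact signatures of the decoders defined below differ slightly:\n",
"\n",
"```python\n",
"def greedy_decode(encoder, decoder, sentence, max_length=10):\n",
"    SOS_token, EOS_token = 0, 1\n",
"    encoder_out, hidden = encoder(sentence)\n",
"    token = torch.tensor([SOS_token])\n",
"    result = []\n",
"    for _ in range(max_length):\n",
"        # Feed the previous token and hidden state back into the decoder\n",
"        out, hidden = decoder(token, hidden)[:2]\n",
"        token = out.argmax(dim=-1).view(1)\n",
"        if token.item() == EOS_token:\n",
"            break\n",
"        result.append(token.item())\n",
"    return result\n",
"```"
]
},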
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(-23351.4941, grad_fn=<SumBackward0>)"
"tensor(-23347.4961, grad_fn=<SumBackward0>)"
]
},
"execution_count": 9,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@ -371,7 +424,7 @@
" self.relu = nn.LeakyReLU()\n",
" self.rnn = nn.GRU(embedding_size, hidden_size)\n",
" self.out = nn.Sequential(\n",
" nn.LeakyReLU(),\n",
"# nn.ReLU(),\n",
" nn.Linear(hidden_size, output_size),\n",
" nn.LogSoftmax(2)\n",
" )\n",
@ -400,12 +453,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"![](img/attention-decoder-network.png)"
"![](img/attention-decoder-network-adapted.png)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 24,
"metadata": {},
"outputs": [
{
@ -421,7 +474,7 @@
"torch.Size([1, 1, 2])"
]
},
"execution_count": 10,
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
@ -435,7 +488,7 @@
" self.device = device\n",
" self.embedding = nn.Sequential(\n",
" nn.Embedding(output_size, embedding_size),\n",
" nn.Dropout(dropout)\n",
"# nn.Dropout(dropout)\n",
" )\n",
" \n",
" # Seperate neural network to learn the attention weights\n",
@ -487,22 +540,22 @@
"hidden_size = 256\n",
"max_length = 10\n",
"\n",
"m = Encoder(eng.n_words, embedding_size, hidden_size, bidirectional=False, device='cpu')\n",
"scentence = torch.tensor([1, 23, 9])\n",
"out, h = m(scentence)\n",
"m = Encoder(eng.n_words, embedding_size, hidden_size, bidirectional=False, device=device)\n",
"sentence = torch.tensor([1, 23, 9], device=device)\n",
"out, h = m(sentence)\n",
"print(out.shape)\n",
"\n",
"encoder_outputs = torch.zeros(max_length, out.shape[-1], device='cpu')\n",
"encoder_outputs = torch.zeros(max_length, out.shape[-1], device=device)\n",
"encoder_outputs[:out.shape[0], :out.shape[-1]] = out.view(out.shape[0], -1)\n",
"\n",
"\n",
"m = AttentionDecoder(embedding_size, hidden_size, 2, device='cpu')\n",
"m(torch.tensor([1]), h, encoder_outputs)[0].shape"
"m = AttentionDecoder(embedding_size, hidden_size, 2, device=device)\n",
"m(torch.tensor([1], device=device), h, encoder_outputs)[0].shape"
]
},
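{
"cell_type": "markdown",
"metadata": {},
"source": [
"The attention layer itself ('separate neural network to learn the attention weights') is mostly hidden by the hunk above. One common formulation, along the lines of the PyTorch seq2seq tutorial, is sketched here; it is not necessarily the exact network used in `AttentionDecoder`:\n",
"\n",
"```python\n",
"import torch\n",
"from torch import nn\n",
"import torch.nn.functional as F\n",
"\n",
"class AttentionSketch(nn.Module):\n",
"    def __init__(self, embedding_size, hidden_size, max_length=10):\n",
"        super().__init__()\n",
"        # One score per encoder position, computed from the current\n",
"        # embedded input and the previous hidden state\n",
"        self.attn = nn.Linear(embedding_size + hidden_size, max_length)\n",
"\n",
"    def forward(self, embedded, hidden, encoder_outputs):\n",
"        # embedded: (1, 1, embedding), hidden: (1, 1, hidden)\n",
"        scores = self.attn(torch.cat((embedded[0], hidden[0]), dim=1))\n",
"        weights = F.softmax(scores, dim=1)  # (1, max_length)\n",
"        # Weighted sum of the encoder outputs -> context vector (1, 1, hidden)\n",
"        context = torch.bmm(weights.unsqueeze(0), encoder_outputs.unsqueeze(0))\n",
"        return context, weights\n",
"```"
]
},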
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
@ -527,10 +580,11 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"epochs = 10\n",
"teacher_forcing_ratio = 0.5\n",
"\n",
"embedding_size = 100\n",
@ -539,12 +593,14 @@
"encoder = Encoder(eng.n_words, embedding_size, context_vector_size, bidirectional)\n",
"context_vector_size = context_vector_size * 2 if bidirectional else context_vector_size \n",
"decoder = AttentionDecoder(embedding_size, context_vector_size, fra.n_words)\n",
"writer = SummaryWriter('tb/train3')"
"\n",
"if 'SummaryWriter' in globals():\n",
" writer = SummaryWriter('tb/train-3')"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 26,
"metadata": {},
"outputs": [
{
@ -553,7 +609,7 @@
"'cuda'"
]
},
"execution_count": 13,
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
@ -564,7 +620,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 27,
"metadata": {},
"outputs": [
{
@ -574,7 +630,13 @@
"epoch 0\n",
"epoch 1\n",
"epoch 2\n",
"epoch 3\n"
"epoch 3\n",
"epoch 4\n",
"epoch 5\n",
"epoch 6\n",
"epoch 7\n",
"epoch 8\n",
"epoch 9\n"
]
}
],
@ -583,10 +645,7 @@
"def train(encoder, decoder):\n",
" criterion = nn.NLLLoss()\n",
" optim_encoder = torch.optim.SGD(encoder.parameters(), lr=0.01)\n",
" optim_decoder = torch.optim.SGD(decoder.parameters(), lr=0.01)\n",
"\n",
" epochs = 4\n",
" batch_size = 1\n",
" optim_decoder = torch.optim.SGD(decoder.parameters(), lr=0.01) \n",
"\n",
" encoder.train(True)\n",
" decoder.train(True)\n",
@ -614,7 +673,8 @@
" loss = run_decoder(decoder, criterion, fra_sentence, h, teacher_forcing, encoder_outputs)\n",
"\n",
" loss.backward()\n",
" writer.add_scalar('loss', loss.cpu().item() / (len(fra_sentence)))\n",
" if 'SummaryWriter' in globals():\n",
" writer.add_scalar('loss', loss.cpu().item() / (len(fra_sentence)))\n",
"\n",
" optim_decoder.step()\n",
" optim_encoder.step()\n",
@ -626,260 +686,251 @@
},
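{
"cell_type": "markdown",
"metadata": {},
"source": [
"`run_decoder` is defined in a cell this diff does not show. A sketch of how teacher forcing is typically wired in, assuming the function loops over the target sentence and either feeds back the gold token or the model's own prediction (the decoder call mirrors the `AttentionDecoder` test above):\n",
"\n",
"```python\n",
"def run_decoder_sketch(decoder, criterion, target, hidden,\n",
"                       teacher_forcing, encoder_outputs, sos_token=0):\n",
"    loss = 0\n",
"    token = torch.tensor([sos_token], device=target.device)\n",
"    for i in range(len(target)):\n",
"        out, hidden = decoder(token, hidden, encoder_outputs)[:2]\n",
"        loss = loss + criterion(out.view(1, -1), target[i].view(1))\n",
"        if teacher_forcing:\n",
"            # Feed the ground-truth word as the next input\n",
"            token = target[i].view(1)\n",
"        else:\n",
"            # Feed the model's own greedy prediction\n",
"            token = out.argmax(dim=-1).view(1)\n",
"    return loss\n",
"```\n",
"\n",
"In `train` above, `teacher_forcing` is presumably drawn per sentence, e.g. `random.random() < teacher_forcing_ratio`."
]
},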
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"m = AttentionDecoder"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"English scentence:\t we re all single\n",
"French scentence:\t nous sommes toutes celibataires\n",
"English scentence:\t i am playing video games\n",
"French scentence:\t je joue a des jeux video\n",
"\n",
"Model translation:\t nous sommes toutes celibataire \n",
"Model translation:\t je joue a a ce \n",
"\n",
"\n",
"English scentence:\t he is amusing himself by playing video games\n",
"French scentence:\t il s amuse en jouant aux jeux videos\n",
"English scentence:\t they re looking for you\n",
"French scentence:\t ils te cherchent\n",
"\n",
"Model translation:\t il est plein a des anglais \n",
"Model translation:\t ils sont en train de \n",
"\n",
"\n",
"English scentence:\t i m an artist\n",
"French scentence:\t je suis une artiste\n",
"English scentence:\t i m sure that s wrong\n",
"French scentence:\t je suis sur que que c est mal\n",
"\n",
"Model translation:\t je suis un artiste \n",
"Model translation:\t je suis sur que c est \n",
"\n",
"\n",
"English scentence:\t he s enjoying himself\n",
"French scentence:\t il s amuse\n",
"English scentence:\t i m about to go out\n",
"French scentence:\t je vais sortir\n",
"\n",
"Model translation:\t il est est a \n",
"Model translation:\t je vais sortir de sortir \n",
"\n",
"\n",
"English scentence:\t he is an american\n",
"French scentence:\t il est americain\n",
"English scentence:\t he is on night duty tonight\n",
"French scentence:\t il travaille de nuit ce soir\n",
"\n",
"Model translation:\t il est un \n",
"Model translation:\t il est au soir sur ce \n",
"\n",
"\n",
"English scentence:\t you re the oldest\n",
"French scentence:\t tu es le plus vieux\n",
"English scentence:\t i m not saying anything\n",
"French scentence:\t je ne dis rien\n",
"\n",
"Model translation:\t vous etes le plus \n",
"Model translation:\t je ne suis pas du \n",
"\n",
"\n",
"English scentence:\t he s about to leave\n",
"French scentence:\t il est sur le point de partir\n",
"English scentence:\t we re not lost\n",
"French scentence:\t nous ne sommes pas perdues\n",
"\n",
"Model translation:\t il est sur le point \n",
"Model translation:\t nous ne sommes pas \n",
"\n",
"\n",
"English scentence:\t i m really happy\n",
"French scentence:\t je suis vraiment heureuse\n",
"English scentence:\t you re very funny\n",
"French scentence:\t tu es tres drole\n",
"\n",
"Model translation:\t je suis vraiment contente \n",
"Model translation:\t tu es tres droles \n",
"\n",
"\n",
"English scentence:\t we re all hungry\n",
"French scentence:\t nous avons tous faim\n",
"English scentence:\t i m writing a novel\n",
"French scentence:\t j ecris un roman\n",
"\n",
"Model translation:\t nous avons toutes faim \n",
"Model translation:\t j ecris en roman \n",
"\n",
"\n",
"English scentence:\t i m not giving you any more money\n",
"French scentence:\t je ne te donnerai pas davantage d argent\n",
"English scentence:\t we re all busy\n",
"French scentence:\t nous sommes toutes occupees\n",
"\n",
"Model translation:\t je ne te donnerai d argent d argent \n",
"Model translation:\t nous sommes tous occupes \n",
"\n",
"\n",
"English scentence:\t he is independent of his parents\n",
"French scentence:\t il est independant de ses parents\n",
"English scentence:\t you re fired\n",
"French scentence:\t tu es vire\n",
"\n",
"Model translation:\t il est plein de ses parents \n",
"Model translation:\t tu es licencie \n",
"\n",
"\n",
"English scentence:\t i am not a doctor but a teacher\n",
"French scentence:\t je ne suis pas medecin mais professeur\n",
"English scentence:\t he is likely to win the game\n",
"French scentence:\t il est probable qu il remporte la partie\n",
"\n",
"Model translation:\t je ne suis pas medecin mais enseignant \n",
"Model translation:\t il a des chances de remporter le \n",
"\n",
"\n",
"English scentence:\t you are responsible for the result\n",
"French scentence:\t vous etes responsable des resultats\n",
"English scentence:\t i m afraid for his life\n",
"French scentence:\t je crains pour sa vie\n",
"\n",
"Model translation:\t vous es responsable de l \n",
"Model translation:\t je crains pour sa vie sa \n",
"\n",
"\n",
"English scentence:\t i m sympathetic\n",
"French scentence:\t j eprouve de la compassion\n",
"English scentence:\t we re going to work tonight\n",
"French scentence:\t nous allons travailler ce soir\n",
"\n",
"Model translation:\t je ne suis \n",
"Model translation:\t nous allons travailler ce soir \n",
"\n",
"\n",
"English scentence:\t you are drunk\n",
"French scentence:\t tu es saoul\n",
"English scentence:\t she is not quite content\n",
"French scentence:\t elle n est pas tout a fait satisfaite\n",
"\n",
"Model translation:\t vous etes impressionnee \n",
"Model translation:\t elle n est pas tout \n",
"\n",
"\n",
"English scentence:\t you aren t as short as i am\n",
"French scentence:\t tu n es pas aussi petite que moi\n",
"English scentence:\t i m a tv addict\n",
"French scentence:\t je suis accro a la tele\n",
"\n",
"Model translation:\t vous n etes pas aussi petit que moi \n",
"Model translation:\t je suis un a la \n",
"\n",
"\n",
"English scentence:\t i m resilient\n",
"French scentence:\t je suis endurante\n",
"English scentence:\t we re not so sure\n",
"French scentence:\t nous n en sommes pas si sures\n",
"\n",
"Model translation:\t je suis endurant \n",
"Model translation:\t nous n en sommes pas \n",
"\n",
"\n",
"English scentence:\t you re wrong again\n",
"French scentence:\t vous avez a nouveau tort\n",
"English scentence:\t she stopped talking\n",
"French scentence:\t elle arreta de parler\n",
"\n",
"Model translation:\t tu as tort \n",
"Model translation:\t elle a arrete \n",
"\n",
"\n",
"English scentence:\t i am thinking of my vacation\n",
"French scentence:\t je pense a mes vacances\n",
"English scentence:\t i m out of ammo\n",
"French scentence:\t je suis a court de munitions\n",
"\n",
"Model translation:\t je songe de mes enfants \n",
"Model translation:\t je suis a court de \n",
"\n",
"\n",
"English scentence:\t she is able to sing very well\n",
"French scentence:\t elle sait tres bien chanter\n",
"English scentence:\t i m sorry i don t recognize you\n",
"French scentence:\t je suis desolee je ne te reconnais pas\n",
"\n",
"Model translation:\t elle va bien bien bien bien \n",
"Model translation:\t je suis desole je vous remets pas \n",
"\n",
"\n",
"English scentence:\t i m lazy\n",
"French scentence:\t je suis faineant\n",
"English scentence:\t you re very brave\n",
"French scentence:\t tu es tres brave\n",
"\n",
"Model translation:\t je suis paresseux \n",
"Model translation:\t vous etes fort brave \n",
"\n",
"\n",
"English scentence:\t he is a cruel person\n",
"French scentence:\t c est un homme cruel\n",
"English scentence:\t she is older and wiser now\n",
"French scentence:\t elle est plus agee et plus sage maintenant\n",
"\n",
"Model translation:\t il est une personne \n",
"Model translation:\t elle est plus et et plus \n",
"\n",
"\n",
"English scentence:\t he is a poet\n",
"French scentence:\t c est un poete\n",
"English scentence:\t you re rude\n",
"French scentence:\t vous etes grossieres\n",
"\n",
"Model translation:\t il est un \n",
"Model translation:\t vous etes grossiers \n",
"\n",
"\n",
"English scentence:\t we re plastered\n",
"French scentence:\t nous sommes bourres\n",
"English scentence:\t she s a jealous woman\n",
"French scentence:\t c est une femme jalouse\n",
"\n",
"Model translation:\t nous sommes bourrees \n",
"Model translation:\t c est une femme jalouse \n",
"\n",
"\n",
"English scentence:\t she suddenly fell silent\n",
"French scentence:\t elle se tut soudain\n",
"English scentence:\t i m going to reconsider it\n",
"French scentence:\t je vais y repenser encore une fois\n",
"\n",
"Model translation:\t elle est deux deux \n",
"Model translation:\t je vais y aller \n",
"\n",
"\n",
"English scentence:\t you re not making this easy\n",
"French scentence:\t vous ne rendez pas ca facile\n",
"English scentence:\t i m not happy about it\n",
"French scentence:\t je n en suis pas contente\n",
"\n",
"Model translation:\t vous ne fais pas ca \n",
"Model translation:\t je n en suis pas heureux \n",
"\n",
"\n",
"English scentence:\t she s very afraid of dogs\n",
"French scentence:\t elle a une peur bleue des chiens\n",
"English scentence:\t you re not the first\n",
"French scentence:\t vous n etes pas le premier\n",
"\n",
"Model translation:\t elle a tres peur des chiens \n",
"Model translation:\t tu n es pas le \n",
"\n",
"\n",
"English scentence:\t she is living abroad\n",
"French scentence:\t elle vit actuellement a l etranger\n",
"English scentence:\t i m just following orders\n",
"French scentence:\t je ne fais qu obeir aux ordres\n",
"\n",
"Model translation:\t elle est l a \n",
"Model translation:\t je suis fais d \n",
"\n",
"\n",
"English scentence:\t i m going to my grandmother s\n",
"French scentence:\t je vais chez ma grand mere\n",
"English scentence:\t i m on crutches for the next month\n",
"French scentence:\t je suis en bequilles pour un mois\n",
"\n",
"Model translation:\t je vais me mon de la \n",
"Model translation:\t je suis en bequilles pour le mois \n",
"\n",
"\n",
"English scentence:\t they re not always right\n",
"French scentence:\t elles n ont pas toujours raison\n",
"English scentence:\t i m in charge of security\n",
"French scentence:\t je suis responsable de la securite\n",
"\n",
"Model translation:\t elles n ont pas toujours \n",
"Model translation:\t je suis la la la la \n",
"\n",
"\n",
"English scentence:\t you are very attractive in blue\n",
"French scentence:\t le bleu vous va tres bien\n",
"English scentence:\t she s young enough to be your daughter\n",
"French scentence:\t elle est assez jeune pour etre ta fille\n",
"\n",
"Model translation:\t vous avez vraiment beaucoup a \n",
"Model translation:\t elle est assez jeune pour etre ta fille \n",
"\n",
"\n",
"English scentence:\t you re winning aren t you\n",
"French scentence:\t vous gagnez n est ce pas\n",
"English scentence:\t you re a jolly good feller\n",
"French scentence:\t t es un joyeux drille\n",
"\n",
"Model translation:\t vous etes est n est ce \n",
"Model translation:\t t es un sacre \n",
"\n",
"\n",
"English scentence:\t i m ticklish\n",
"French scentence:\t je suis chatouilleuse\n",
"English scentence:\t he s a bit rough around the edges\n",
"French scentence:\t il est un peu rugueux\n",
"\n",
"Model translation:\t je suis mal \n",
"Model translation:\t il est un peu peu \n",
"\n",
"\n",
"English scentence:\t i m not going to name names\n",
"French scentence:\t je ne vais pas citer de noms\n",
"English scentence:\t we re giving up\n",
"French scentence:\t nous abandonnons\n",
"\n",
"Model translation:\t je ne vais pas m a \n",
"Model translation:\t nous abandonnons \n",
"\n",
"\n",
"English scentence:\t they are at broadway avenue\n",
"French scentence:\t ils sont au broadway avenue\n",
"English scentence:\t they are very big\n",
"French scentence:\t ils sont tres grands\n",
"\n",
"Model translation:\t ils sont a l des \n",
"Model translation:\t ils sont tres gros \n",
"\n",
"\n",
"English scentence:\t i m sure\n",
"French scentence:\t j en suis sure\n",
"English scentence:\t i m puzzled\n",
"French scentence:\t je suis perplexe\n",
"\n",
"Model translation:\t je suis certain \n",
"Model translation:\t je suis en \n",
"\n",
"\n",
"English scentence:\t she said that she was happy\n",
"French scentence:\t elle a dit qu elle etait heureuse\n",
"English scentence:\t he is no longer a child\n",
"French scentence:\t ce n est plus un enfant\n",
"\n",
"Model translation:\t elle dit qu a heureuse \n",
"Model translation:\t ce n est plus un enfant \n",
"\n",
"\n",
"English scentence:\t i m all done\n",
"French scentence:\t j ai tout fini\n",
"English scentence:\t we re up early\n",
"French scentence:\t nous sommes debout tot\n",
"\n",
"Model translation:\t je ai tout termine \n",
"Model translation:\t nous sommes tot tot \n",
"\n",
"\n",
"English scentence:\t we re the problem\n",
"French scentence:\t nous sommes le probleme\n",
"English scentence:\t i am going to be fourteen\n",
"French scentence:\t je vais avoir quatorze ans\n",
"\n",
"Model translation:\t nous sommes le probleme \n",
"Model translation:\t je vais a tout \n",
"\n",
"\n",
"English scentence:\t i m being serious\n",
"French scentence:\t je suis serieux\n",
"English scentence:\t i m sorry i missed your birthday\n",
"French scentence:\t je suis desolee d avoir rate ton anniversaire\n",
"\n",
"Model translation:\t je suis serieux \n",
"Model translation:\t je suis desole d avoir rate votre \n",
"\n",
"\n"
]
@ -917,15 +968,6 @@
" \n",
"translate(20, 60)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"decoder"
]
}
],
"metadata": {

Binary file not shown (new image, 36 KiB)

Binary file not shown (new image, 20 KiB)

seq2seq/img/hello-lead.png (new binary file, 4.5 KiB; not shown)

seq2seq/img/seq2seq.png (new binary file, 22 KiB; not shown)