10.7. Sequence to Sequence Learning¶

As we have seen in Section 10.5, in machine translation both the input and the output are variable-length sequences. To address this type of problem, we designed a general encoder-decoder architecture in Section 10.6. In this section, we will use two RNNs to design the encoder and the decoder of this architecture and apply it to sequence to sequence learning for machine translation.

Following the design principle of the encoder-decoder architecture, the RNN encoder can take a variable-length sequence as input and transform it into a fixed-shape hidden state. In other words, the information of the input (source) sequence is encoded in the hidden state of the RNN encoder. To generate the output sequence token by token, a separate RNN decoder can predict the next token based on what tokens have been seen (such as in language modeling) or generated, together with the encoded information of the input sequence. Fig. 10.7.1 illustrates how to use two RNNs for sequence to sequence learning in machine translation.

Fig. 10.7.1 Sequence to sequence learning with an RNN encoder and an RNN decoder.

In Fig. 10.7.1, the special “<eos>” token marks the end of the sequence. The model can stop making predictions once this token is generated. At the initial time step of the RNN decoder, there are two special design decisions. First, the special beginning-of-sequence “<bos>” token is an input. Second, the final hidden state of the RNN encoder is used to initialize the hidden state of the decoder. In designs such as that of Sutskever et al. (2014), this is exactly how the encoded input sequence information is fed into the decoder for generating the output (target) sequence. In some other designs, such as that of Cho et al. (2014), the final hidden state of the encoder is also fed into the decoder as part of the inputs at every time step, as shown in Fig. 10.7.1.

10.7.1. Teacher Forcing¶

While the encoder input is just tokens from the source sequence, the decoder input and output are not so straightforward in encoder-decoder training. A common approach is teacher forcing, where the original target sequence (token labels) is fed into the decoder as input. More concretely, the special beginning-of-sequence token and the original target sequence excluding the final token are concatenated as input to the decoder, while the decoder output (labels for training) is the original target sequence, shifted by one token: “<bos>”, “Ils”, “regardent”, “.” $$\rightarrow$$ “Ils”, “regardent”, “.”, “<eos>” (Fig. 10.7.1).
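As a minimal standalone sketch (plain Python lists rather than the d2l pipeline), the decoder input and label under teacher forcing differ by a one-token shift:

# Hypothetical target sequence from Fig. 10.7.1, shown as token strings
tgt = ['Ils', 'regardent', '.', '<eos>']
dec_input = ['<bos>'] + tgt[:-1]  # '<bos>', 'Ils', 'regardent', '.'
dec_label = tgt                   # 'Ils', 'regardent', '.', '<eos>'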

Our implementation in Section 10.5.3 prepared training data for teacher forcing, where shifting tokens for self-supervised learning is similar to the training of language models in Section 9.3. An alternative approach is to feed the predicted token from the previous time step as the current input to the decoder.

In the following, we will explain the design of Fig. 10.7.1 in greater detail. We will train this model for machine translation on the English-French dataset as introduced in Section 10.5.

import collections
import math
import torch
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l

import collections
import math
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn, rnn
from d2l import mxnet as d2l

npx.set_np()

import collections
import math
import tensorflow as tf
from d2l import tensorflow as d2l


10.7.2. Encoder¶

Technically speaking, the encoder transforms an input sequence of variable length into a fixed-shape context variable $$\mathbf{c}$$, and encodes the input sequence information in this context variable. As depicted in Fig. 10.7.1, we can use an RNN to design the encoder.

Let’s consider a sequence example (batch size: 1). Suppose that the input sequence is $$x_1, \ldots, x_T$$, such that $$x_t$$ is the $$t^{\mathrm{th}}$$ token in the input text sequence. At time step $$t$$, the RNN transforms the input feature vector $$\mathbf{x}_t$$ for $$x_t$$ and the hidden state $$\mathbf{h}_{t-1}$$ from the previous time step into the current hidden state $$\mathbf{h}_t$$. We can use a function $$f$$ to express the transformation of the RNN’s recurrent layer:

(10.7.1)$\mathbf{h}_t = f(\mathbf{x}_t, \mathbf{h}_{t-1}).$

In general, the encoder transforms the hidden states at all the time steps into the context variable through a customized function $$q$$:

(10.7.2)$\mathbf{c} = q(\mathbf{h}_1, \ldots, \mathbf{h}_T).$

For example, when choosing $$q(\mathbf{h}_1, \ldots, \mathbf{h}_T) = \mathbf{h}_T$$ such as in Fig. 10.7.1, the context variable is just the hidden state $$\mathbf{h}_T$$ of the input sequence at the final time step.

So far we have used a unidirectional RNN to design the encoder, where a hidden state only depends on the input subsequence at and before the time step of the hidden state. We can also construct encoders using bidirectional RNNs. In this case, a hidden state depends on the subsequence before and after the time step (including the input at the current time step), which encodes the information of the entire sequence.
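As a hedged sketch of this option (using PyTorch’s built-in nn.GRU with bidirectional=True rather than the d2l classes below, with toy sizes), the per-step outputs concatenate the forward and backward hidden states:

import torch
from torch import nn

# A bidirectional two-layer GRU: each step's output sees past and future tokens
birnn = nn.GRU(input_size=8, hidden_size=16, num_layers=2, bidirectional=True)
embs = torch.randn(9, 4, 8)  # (num_steps, batch_size, embed_size)
outputs, state = birnn(embs)
print(outputs.shape)  # torch.Size([9, 4, 32]): forward + backward concatenated
print(state.shape)    # torch.Size([4, 4, 16]): num_layers * 2 directions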

Now let’s implement the RNN encoder. Note that we use an embedding layer to obtain the feature vector for each token in the input sequence. The weight of an embedding layer is a matrix whose number of rows equals the size of the input vocabulary (vocab_size) and whose number of columns equals the feature vector’s dimension (embed_size). For any input token index $$i$$, the embedding layer fetches the $$i^{\mathrm{th}}$$ row (starting from 0) of the weight matrix to return its feature vector. In addition, here we choose a multilayer GRU to implement the encoder.
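Before the full encoder, here is a tiny standalone check of the embedding lookup just described (toy sizes chosen purely for illustration):

import torch
from torch import nn

emb = nn.Embedding(10, 4)   # vocab_size=10, embed_size=4
idx = torch.tensor([2, 5])  # two token indices
print(emb(idx).shape)       # torch.Size([2, 4]): rows 2 and 5 of the weight matrix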

def init_seq2seq(module):  #@save
    """Initialize weights for Seq2Seq."""
    if type(module) == nn.Linear:
        nn.init.xavier_uniform_(module.weight)
    if type(module) == nn.GRU:
        for param in module._flat_weights_names:
            if "weight" in param:
                nn.init.xavier_uniform_(module._parameters[param])

class Seq2SeqEncoder(d2l.Encoder):  #@save
    """The RNN encoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = d2l.GRU(embed_size, num_hiddens, num_layers, dropout)
        self.apply(init_seq2seq)

    def forward(self, X, *args):
        # X shape: (batch_size, num_steps)
        embs = self.embedding(X.t().type(torch.int64))
        # embs shape: (num_steps, batch_size, embed_size)
        output, state = self.rnn(embs)
        # output shape: (num_steps, batch_size, num_hiddens)
        # state shape: (num_layers, batch_size, num_hiddens)
        return output, state

class Seq2SeqEncoder(d2l.Encoder):  #@save
    """The RNN encoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = d2l.GRU(num_hiddens, num_layers, dropout)
        self.initialize(init.Xavier())

    def forward(self, X, *args):
        # X shape: (batch_size, num_steps)
        embs = self.embedding(d2l.transpose(X))
        # embs shape: (num_steps, batch_size, embed_size)
        output, state = self.rnn(embs)
        # output shape: (num_steps, batch_size, num_hiddens)
        # state shape: (num_layers, batch_size, num_hiddens)
        return output, state

class Seq2SeqEncoder(d2l.Encoder):  #@save
    """The RNN encoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embed_size)
        self.rnn = d2l.GRU(num_hiddens, num_layers, dropout)

    def call(self, X, *args):
        # X shape: (batch_size, num_steps)
        embs = self.embedding(tf.transpose(X))
        # embs shape: (num_steps, batch_size, embed_size)
        output, state = self.rnn(embs)
        # output shape: (num_steps, batch_size, num_hiddens)
        # state shape: (num_layers, batch_size, num_hiddens)
        return output, state


The returned variables of recurrent layers have been explained in Section 9.6. Let’s use a concrete example to illustrate the above encoder implementation. Below we instantiate a two-layer GRU encoder whose number of hidden units is 16. Given a minibatch of sequence inputs X (batch size: 4, number of time steps: 9), the hidden states of the last layer at all the time steps (outputs returned by the encoder’s recurrent layers) are a tensor of shape (number of time steps, batch size, number of hidden units).

vocab_size, embed_size, num_hiddens, num_layers = 10, 8, 16, 2
batch_size, num_steps = 4, 9

encoder = Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers)
X = torch.zeros((batch_size, num_steps))
outputs, state = encoder(X)

d2l.check_shape(outputs, (num_steps, batch_size, num_hiddens))

vocab_size, embed_size, num_hiddens, num_layers = 10, 8, 16, 2
batch_size, num_steps = 4, 9

encoder = Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers)
X = np.zeros((batch_size, num_steps))
outputs, state = encoder(X)

d2l.check_shape(outputs, (num_steps, batch_size, num_hiddens))

vocab_size, embed_size, num_hiddens, num_layers = 10, 8, 16, 2
batch_size, num_steps = 4, 9

encoder = Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers)
X = tf.zeros((batch_size, num_steps))
outputs, state = encoder(X)

d2l.check_shape(outputs, (num_steps, batch_size, num_hiddens))


Since a GRU is employed here, the shape of the multilayer hidden states at the final time step is (number of hidden layers, batch size, number of hidden units).

d2l.check_shape(state, (num_layers, batch_size, num_hiddens))

d2l.check_shape(state, (num_layers, batch_size, num_hiddens))

d2l.check_len(state, num_layers)
d2l.check_shape(state[0], (batch_size, num_hiddens))


10.7.3. Decoder¶

As we just mentioned, the context variable $$\mathbf{c}$$ of the encoder’s output encodes the entire input sequence $$x_1, \ldots, x_T$$. Given the output sequence $$y_1, y_2, \ldots, y_{T'}$$ from the training dataset, for each time step $$t'$$ (the symbol differs from the time step $$t$$ of input sequences or encoders), the probability of the decoder output $$y_{t'}$$ is conditional on the previous output subsequence $$y_1, \ldots, y_{t'-1}$$ and the context variable $$\mathbf{c}$$, i.e., $$P(y_{t'} \mid y_1, \ldots, y_{t'-1}, \mathbf{c})$$.
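Equivalently, applying the chain rule over all the time steps of the output sequence, the decoder models

$P(y_1, \ldots, y_{T'} \mid \mathbf{c}) = \prod_{t'=1}^{T'} P(y_{t'} \mid y_1, \ldots, y_{t'-1}, \mathbf{c}).$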

To model this conditional probability on sequences, we can use another RNN as the decoder. At any time step $$t^\prime$$ on the output sequence, the RNN takes the output $$y_{t^\prime-1}$$ from the previous time step and the context variable $$\mathbf{c}$$ as its input, then transforms them and the previous hidden state $$\mathbf{s}_{t^\prime-1}$$ into the hidden state $$\mathbf{s}_{t^\prime}$$ at the current time step. As a result, we can use a function $$g$$ to express the transformation of the decoder’s hidden layer:

(10.7.3)$\mathbf{s}_{t^\prime} = g(y_{t^\prime-1}, \mathbf{c}, \mathbf{s}_{t^\prime-1}).$

After obtaining the hidden state of the decoder, we can use an output layer and the softmax operation to compute the conditional probability distribution $$P(y_{t^\prime} \mid y_1, \ldots, y_{t^\prime-1}, \mathbf{c})$$ for the output at time step $$t^\prime$$.

Following Fig. 10.7.1, when implementing the decoder as follows, we directly use the hidden state at the final time step of the encoder to initialize the hidden state of the decoder. This requires that the RNN encoder and the RNN decoder have the same number of layers and hidden units. To further incorporate the encoded input sequence information, the context variable is concatenated with the decoder input at all the time steps. To predict the probability distribution of the output token, a fully connected layer is used to transform the hidden state at the final layer of the RNN decoder.

class Seq2SeqDecoder(d2l.Decoder):
    """The RNN decoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = d2l.GRU(embed_size+num_hiddens, num_hiddens,
                           num_layers, dropout)
        self.dense = nn.LazyLinear(vocab_size)
        self.apply(init_seq2seq)

    def init_state(self, enc_outputs, *args):
        return enc_outputs[1]

    def forward(self, X, enc_state):
        # X shape: (batch_size, num_steps)
        # embs shape: (num_steps, batch_size, embed_size)
        embs = self.embedding(X.t().type(torch.int32))
        # context shape: (batch_size, num_hiddens)
        context = enc_state[-1]
        # Broadcast context to (num_steps, batch_size, num_hiddens)
        context = context.repeat(embs.shape[0], 1, 1)
        # Concat at the feature dimension
        embs_and_context = torch.cat((embs, context), -1)
        outputs, state = self.rnn(embs_and_context, enc_state)
        outputs = self.dense(outputs).swapaxes(0, 1)
        # outputs shape: (batch_size, num_steps, vocab_size)
        # state shape: (num_layers, batch_size, num_hiddens)
        return outputs, state

class Seq2SeqDecoder(d2l.Decoder):
    """The RNN decoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = d2l.GRU(num_hiddens, num_layers, dropout)
        self.dense = nn.Dense(vocab_size, flatten=False)
        self.initialize(init.Xavier())

    def init_state(self, enc_outputs, *args):
        return enc_outputs[1]

    def forward(self, X, enc_state):
        # X shape: (batch_size, num_steps)
        # embs shape: (num_steps, batch_size, embed_size)
        embs = self.embedding(d2l.transpose(X))
        # context shape: (batch_size, num_hiddens)
        context = enc_state[-1]
        # Broadcast context to (num_steps, batch_size, num_hiddens)
        context = np.tile(context, (embs.shape[0], 1, 1))
        # Concat at the feature dimension
        embs_and_context = np.concatenate((embs, context), -1)
        outputs, state = self.rnn(embs_and_context, enc_state)
        outputs = self.dense(outputs).swapaxes(0, 1)
        # outputs shape: (batch_size, num_steps, vocab_size)
        # state shape: (num_layers, batch_size, num_hiddens)
        return outputs, state

class Seq2SeqDecoder(d2l.Decoder):
    """The RNN decoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embed_size)
        self.rnn = d2l.GRU(num_hiddens, num_layers, dropout)
        self.dense = tf.keras.layers.Dense(vocab_size)

    def init_state(self, enc_outputs, *args):
        return enc_outputs[1]

    def call(self, X, enc_state):
        # X shape: (batch_size, num_steps)
        # embs shape: (num_steps, batch_size, embed_size)
        embs = self.embedding(tf.transpose(X))
        # context shape: (batch_size, num_hiddens)
        context = enc_state[-1]
        # Broadcast context to (num_steps, batch_size, num_hiddens)
        context = tf.tile(tf.expand_dims(context, 0), (embs.shape[0], 1, 1))
        # Concat at the feature dimension
        embs_and_context = tf.concat((embs, context), -1)
        outputs, state = self.rnn(embs_and_context, enc_state)
        outputs = tf.transpose(self.dense(outputs), (1, 0, 2))
        # outputs shape: (batch_size, num_steps, vocab_size)
        # state shape: (num_layers, batch_size, num_hiddens)
        return outputs, state


To illustrate the implemented decoder, below we instantiate it with the same hyperparameters as the aforementioned encoder. As we can see, the output shape of the decoder becomes (batch size, number of time steps, vocabulary size), where the last dimension of the tensor stores the predicted token distribution.

decoder = Seq2SeqDecoder(vocab_size, embed_size, num_hiddens, num_layers)
state = decoder.init_state(encoder(X))
outputs, state = decoder(X, state)

d2l.check_shape(outputs, (batch_size, num_steps, vocab_size))
d2l.check_shape(state, (num_layers, batch_size, num_hiddens))


decoder = Seq2SeqDecoder(vocab_size, embed_size, num_hiddens, num_layers)
state = decoder.init_state(encoder(X))
outputs, state = decoder(X, state)

d2l.check_shape(outputs, (batch_size, num_steps, vocab_size))
d2l.check_shape(state, (num_layers, batch_size, num_hiddens))

decoder = Seq2SeqDecoder(vocab_size, embed_size, num_hiddens, num_layers)
state = decoder.init_state(encoder(X))
outputs, state = decoder(X, state)

d2l.check_shape(outputs, (batch_size, num_steps, vocab_size))
d2l.check_len(state, num_layers)
d2l.check_shape(state[0], (batch_size, num_hiddens))


To summarize, the layers in the above RNN encoder-decoder model are illustrated in Fig. 10.7.2.

Fig. 10.7.2 Layers in an RNN encoder-decoder model.

10.7.4. Encoder-Decoder for Sequence to Sequence Learning¶

Based on the architecture described in Section 10.6, the RNN encoder-decoder model for sequence to sequence learning just puts the RNN encoder and the RNN decoder together.

class Seq2Seq(d2l.EncoderDecoder):  #@save
    def __init__(self, encoder, decoder, tgt_pad, lr):
        super().__init__(encoder, decoder)
        self.save_hyperparameters()

    def validation_step(self, batch):
        Y_hat = self(*batch[:-1])
        self.plot('loss', self.loss(Y_hat, batch[-1]), train=False)

    def configure_optimizers(self):
        # Adam optimizer is used here
        return torch.optim.Adam(self.parameters(), lr=self.lr)

class Seq2Seq(d2l.EncoderDecoder):  #@save
    def __init__(self, encoder, decoder, tgt_pad, lr):
        super().__init__(encoder, decoder)
        self.save_hyperparameters()

    def validation_step(self, batch):
        Y_hat = self(*batch[:-1])
        self.plot('loss', self.loss(Y_hat, batch[-1]), train=False)

    def configure_optimizers(self):
        # Adam optimizer is used here
        return gluon.Trainer(self.collect_params(), 'adam',
                             {'learning_rate': self.lr})

class Seq2Seq(d2l.EncoderDecoder):  #@save
    def __init__(self, encoder, decoder, tgt_pad, lr):
        super().__init__(encoder, decoder)
        self.save_hyperparameters()

    def validation_step(self, batch):
        Y_hat = self(*batch[:-1])
        self.plot('loss', self.loss(Y_hat, batch[-1]), train=False)

    def configure_optimizers(self):
        # Adam optimizer is used here
        return tf.keras.optimizers.Adam(learning_rate=self.lr)


10.7.5. Loss Function with Masking¶

At each time step, the decoder predicts a probability distribution for the output tokens. Similar to language modeling, we can apply softmax to obtain the distribution and calculate the cross-entropy loss for optimization. Recall from Section 10.5 that special padding tokens are appended to the end of sequences so that sequences of varying lengths can be efficiently loaded in minibatches of the same shape. However, the prediction of padding tokens should be excluded from loss calculations. To this end, we can mask irrelevant entries with zero values, so that multiplying any irrelevant prediction by zero yields zero.

@d2l.add_to_class(Seq2Seq)
def loss(self, Y_hat, Y):
    l = super(Seq2Seq, self).loss(Y_hat, Y, averaged=False)
    # Zero out positions that hold the padding token
    mask = (Y.reshape(-1) != self.tgt_pad).type(torch.float32)
    return (l * mask).sum() / mask.sum()

@d2l.add_to_class(Seq2Seq)
def loss(self, Y_hat, Y):
    l = super(Seq2Seq, self).loss(Y_hat, Y, averaged=False)
    # Zero out positions that hold the padding token
    mask = (Y.reshape(-1) != self.tgt_pad).astype(np.float32)
    return (l * mask).sum() / mask.sum()

@d2l.add_to_class(Seq2Seq)
def loss(self, Y_hat, Y):
    l = super(Seq2Seq, self).loss(Y_hat, Y, averaged=False)
    # Zero out positions that hold the padding token
    mask = tf.cast(tf.reshape(Y, -1) != self.tgt_pad, tf.float32)
    return tf.reduce_sum(l * mask) / tf.reduce_sum(mask)
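As a quick standalone check of the masking idea (a PyTorch sketch with a hypothetical padding index of 0):

import torch

Y = torch.tensor([[1, 2, 0, 0]])  # assume 0 is the padding index
mask = (Y.reshape(-1) != 0).type(torch.float32)
print(mask)  # tensor([1., 1., 0., 0.]): padded positions contribute zero loss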


10.7.6. Training¶

Now we can create and train an RNN encoder-decoder model for sequence to sequence learning on the machine translation dataset.

data = d2l.MTFraEng(batch_size=128)
embed_size, num_hiddens, num_layers, dropout = 256, 256, 2, 0.2
encoder = Seq2SeqEncoder(
    len(data.src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqDecoder(
    len(data.tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
model = Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab['<pad>'],
                lr=0.001)
trainer = d2l.Trainer(max_epochs=30, gradient_clip_val=1, num_gpus=1)
trainer.fit(model, data)

data = d2l.MTFraEng(batch_size=128)
embed_size, num_hiddens, num_layers, dropout = 256, 256, 2, 0.2
encoder = Seq2SeqEncoder(
    len(data.src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqDecoder(
    len(data.tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
model = Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab['<pad>'],
                lr=0.001)
trainer = d2l.Trainer(max_epochs=30, gradient_clip_val=1, num_gpus=1)
trainer.fit(model, data)

data = d2l.MTFraEng(batch_size=128)
embed_size, num_hiddens, num_layers, dropout = 256, 256, 2, 0.2
with d2l.try_gpu():
    encoder = Seq2SeqEncoder(
        len(data.src_vocab), embed_size, num_hiddens, num_layers, dropout)
    decoder = Seq2SeqDecoder(
        len(data.tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
    model = Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab['<pad>'],
                    lr=0.001)
trainer = d2l.Trainer(max_epochs=30, gradient_clip_val=1)
trainer.fit(model, data)


10.7.7. Prediction¶

To predict the output sequence token by token, at each decoder time step the predicted token from the previous time step is fed into the decoder as an input. Similar to training, at the initial time step the beginning-of-sequence (“<bos>”) token is fed into the decoder. This prediction process is illustrated in Fig. 10.7.3. When the end-of-sequence (“<eos>”) token is predicted, the prediction of the output sequence is complete.

Fig. 10.7.3 Predicting the output sequence token by token using an RNN encoder-decoder.

We will introduce different strategies for sequence generation in Section 10.8.

@d2l.add_to_class(d2l.EncoderDecoder)  #@save
def predict_step(self, batch, device, num_steps,
                 save_attention_weights=False):
    batch = [a.to(device) for a in batch]
    src, tgt, src_valid_len, _ = batch
    enc_outputs = self.encoder(src, src_valid_len)
    dec_state = self.decoder.init_state(enc_outputs, src_valid_len)
    outputs, attention_weights = [tgt[:, 0].unsqueeze(1), ], []
    for _ in range(num_steps):
        Y, dec_state = self.decoder(outputs[-1], dec_state)
        outputs.append(Y.argmax(2))
        # Save attention weights (to be covered later)
        if save_attention_weights:
            attention_weights.append(self.decoder.attention_weights)
    return torch.cat(outputs[1:], 1), attention_weights

@d2l.add_to_class(d2l.EncoderDecoder)  #@save
def predict_step(self, batch, device, num_steps,
                 save_attention_weights=False):
    batch = [a.as_in_context(device) for a in batch]
    src, tgt, src_valid_len, _ = batch
    enc_outputs = self.encoder(src, src_valid_len)
    dec_state = self.decoder.init_state(enc_outputs, src_valid_len)
    outputs, attention_weights = [np.expand_dims(tgt[:, 0], 1), ], []
    for _ in range(num_steps):
        Y, dec_state = self.decoder(outputs[-1], dec_state)
        outputs.append(Y.argmax(2))
        # Save attention weights (to be covered later)
        if save_attention_weights:
            attention_weights.append(self.decoder.attention_weights)
    return np.concatenate(outputs[1:], 1), attention_weights

@d2l.add_to_class(d2l.EncoderDecoder)  #@save
def predict_step(self, batch, device, num_steps,
                 save_attention_weights=False):
    src, tgt, src_valid_len, _ = batch
    enc_outputs = self.encoder(src, src_valid_len, training=False)
    dec_state = self.decoder.init_state(enc_outputs, src_valid_len)
    outputs, attention_weights = [tf.expand_dims(tgt[:, 0], 1), ], []
    for _ in range(num_steps):
        Y, dec_state = self.decoder(outputs[-1], dec_state, training=False)
        outputs.append(tf.argmax(Y, 2))
        # Save attention weights (to be covered later)
        if save_attention_weights:
            attention_weights.append(self.decoder.attention_weights)
    return tf.concat(outputs[1:], 1), attention_weights


10.7.8. Evaluation of Predicted Sequences¶

We can evaluate a predicted sequence by comparing it with the label sequence (the ground truth). BLEU (Bilingual Evaluation Understudy), though originally proposed for evaluating machine translation results (Papineni et al., 2002), has been extensively used in measuring the quality of output sequences for different applications. In principle, for any $$n$$-gram in the predicted sequence, BLEU evaluates whether this $$n$$-gram appears in the label sequence.

Denote by $$p_n$$ the precision of $$n$$-grams, which is the ratio of the number of matched $$n$$-grams in the predicted and label sequences to the number of $$n$$-grams in the predicted sequence. To explain, given a label sequence $$A$$, $$B$$, $$C$$, $$D$$, $$E$$, $$F$$, and a predicted sequence $$A$$, $$B$$, $$B$$, $$C$$, $$D$$, we have $$p_1 = 4/5$$, $$p_2 = 3/4$$, $$p_3 = 1/3$$, and $$p_4 = 0$$. Besides, let $$\mathrm{len}_{\text{label}}$$ and $$\mathrm{len}_{\text{pred}}$$ be the numbers of tokens in the label sequence and the predicted sequence, respectively. Then, BLEU is defined as

(10.7.4)$\exp\left(\min\left(0, 1 - \frac{\mathrm{len}_{\text{label}}}{\mathrm{len}_{\text{pred}}}\right)\right) \prod_{n=1}^k p_n^{1/2^n},$

where $$k$$ is the length of the longest $$n$$-gram used for matching.

Based on the definition of BLEU in (10.7.4), whenever the predicted sequence is the same as the label sequence, BLEU is 1. Moreover, since matching longer $$n$$-grams is more difficult, BLEU assigns a greater weight to longer $$n$$-gram precision. Specifically, when $$p_n$$ is fixed, $$p_n^{1/2^n}$$ increases as $$n$$ grows (the original paper uses $$p_n^{1/n}$$). Furthermore, since predicting shorter sequences tends to yield a higher $$p_n$$ value, the exponential factor before the product in (10.7.4) penalizes shorter predicted sequences. For example, when $$k=2$$, given the label sequence $$A$$, $$B$$, $$C$$, $$D$$, $$E$$, $$F$$ and the predicted sequence $$A$$, $$B$$, although $$p_1 = p_2 = 1$$, the penalty factor $$\exp(1-6/2) \approx 0.14$$ lowers the BLEU score.
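We can verify the arithmetic of this example directly (a standalone check, not part of the d2l code):

import math

# Label length 6, predicted length 2, k = 2 with p1 = p2 = 1
penalty = math.exp(min(0, 1 - 6 / 2))
bleu_k2 = penalty * (1.0 ** (1 / 2)) * (1.0 ** (1 / 4))
print(f'{penalty:.2f}, {bleu_k2:.2f}')  # 0.14, 0.14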

We implement the BLEU measure as follows.

def bleu(pred_seq, label_seq, k):  #@save
    """Compute the BLEU."""
    pred_tokens, label_tokens = pred_seq.split(' '), label_seq.split(' ')
    len_pred, len_label = len(pred_tokens), len(label_tokens)
    score = math.exp(min(0, 1 - len_label / len_pred))
    for n in range(1, min(k, len_pred) + 1):
        num_matches, label_subs = 0, collections.defaultdict(int)
        for i in range(len_label - n + 1):
            label_subs[' '.join(label_tokens[i: i + n])] += 1
        for i in range(len_pred - n + 1):
            if label_subs[' '.join(pred_tokens[i: i + n])] > 0:
                num_matches += 1
                label_subs[' '.join(pred_tokens[i: i + n])] -= 1
        score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))
    return score
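As a quick sanity check (added here for illustration), applying bleu to the running example above reproduces the hand-computed precisions and length penalty:

# Label A B C D E F, prediction A B B C D: p1 = 4/5, p2 = 3/4
print(f'{bleu("A B B C D", "A B C D E F", k=2):.3f}')  # 0.681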


In the end, we use the trained RNN encoder-decoder to translate a few English sentences into French and compute the BLEU of the results.

engs = ['go .', 'i lost .', 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
preds, _ = model.predict_step(
    data.build(engs, fras), d2l.try_gpu(), data.num_steps)
for en, fr, p in zip(engs, fras, preds):
    translation = []
    for token in data.tgt_vocab.to_tokens(p):
        if token == '<eos>':
            break
        translation.append(token)
    print(f'{en} => {translation}, bleu,'
          f'{bleu(" ".join(translation), fr, k=2):.3f}')

go . => ['va', 'te', 'faire', 'foutre', 'question', '!'], bleu,0.000
i lost . => ["j'ai", 'perdu', '.'], bleu,1.000
he's calm . => ['il', 'court', '.'], bleu,0.000
i'm home . => ['je', 'suis', 'chez', 'moi', '<unk>', '.'], bleu,0.803

engs = ['go .', 'i lost .', 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
preds, _ = model.predict_step(
    data.build(engs, fras), d2l.try_gpu(), data.num_steps)
for en, fr, p in zip(engs, fras, preds):
    translation = []
    for token in data.tgt_vocab.to_tokens(p):
        if token == '<eos>':
            break
        translation.append(token)
    print(f'{en} => {translation}, bleu,'
          f'{bleu(" ".join(translation), fr, k=2):.3f}')

go . => ['<unk>', '.'], bleu,0.000
i lost . => ['je', 'suis', 'gras', '.'], bleu,0.000
he's calm . => ['il', 'court', '.'], bleu,0.000
i'm home . => ['je', 'suis', 'malade', '.'], bleu,0.512

engs = ['go .', 'i lost .', 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
preds, _ = model.predict_step(
    data.build(engs, fras), d2l.try_gpu(), data.num_steps)
for en, fr, p in zip(engs, fras, preds):
    translation = []
    for token in data.tgt_vocab.to_tokens(p):
        if token == '<eos>':
            break
        translation.append(token)
    print(f'{en} => {translation}, bleu,'
          f'{bleu(" ".join(translation), fr, k=2):.3f}')

go . => ['<unk>', '!'], bleu,0.000
i lost . => ['je', '<unk>', '.'], bleu,0.000
he's calm . => ['nous', '<unk>', '!'], bleu,0.000
i'm home . => ['je', 'suis', '<unk>', '<unk>', '.'], bleu,0.548


10.7.9. Summary¶

• Following the design of the encoder-decoder architecture, we can use two RNNs to design a model for sequence to sequence learning.

• In encoder-decoder training, the teacher forcing approach feeds original output sequences (in contrast to predictions) into the decoder.

• When implementing the encoder and the decoder, we can use multilayer RNNs.

• We can use masks to filter out irrelevant computations, such as when calculating the loss.

• BLEU is a popular measure for evaluating output sequences by matching $$n$$-grams between the predicted sequence and the label sequence.

10.7.10. Exercises¶

1. Can you adjust the hyperparameters to improve the translation results?

2. Rerun the experiment without using masks in the loss calculation. What results do you observe? Why?

3. If the encoder and the decoder differ in the number of layers or the number of hidden units, how can we initialize the hidden state of the decoder?

4. In training, replace teacher forcing with feeding the prediction at the previous time step into the decoder. How does this influence the performance?

5. Rerun the experiment by replacing GRU with LSTM.

6. Are there any other ways to design the output layer of the decoder?