Seq2seq (Sequence to Sequence) Model with PyTorch
What is NLP?
NLP (Natural Language Processing) is one of the popular branches of artificial intelligence; it helps computers understand, manipulate, or respond to humans in their natural language. NLP is also the engine behind Google Translate, which helps us understand other languages.
What is Seq2Seq?
Seq2Seq is an encoder-decoder based method for machine translation and language processing that maps an input sequence to an output sequence with a tag and attention value. The idea is to use two RNNs that work together with a special token, trying to predict the next state sequence from the previous sequence.
How to Predict the Next Sequence from the Previous Sequence
Here are the steps to predict a sequence from the previous sequence with PyTorch.
Step 1) Load the Data
For our dataset, you will use a dataset of tab-delimited bilingual sentence pairs. Here I will use the English-to-Indonesian dataset. You can pick whatever you like, but remember to change the file name and directory in the code.
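For reference, the code below expects a plain text file at text/eng-ind.txt in which each line holds one English sentence and its translation, separated by a single tab. A hypothetical couple of rows (the pairs themselves are taken from the sample output later in this tutorial) would look roughly like this:

tom is finishing his work	tom sedang menyelesaikan pekerjaannya
tom bought me roses	tom membelikanku bunga mawar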
from __future__ import unicode_literals, print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import numpy as np
import pandas as pd
import os
import re
import random

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Step 2) Data Preparation
You cannot use the dataset directly. You need to split the sentences into words and convert them into one-hot vectors. Every word will be uniquely indexed in the Lang class to build a dictionary. The Lang class stores every sentence and splits it word by word with addSentence. It then creates a dictionary for the sequence-to-sequence model by indexing every unknown word.
SOS_token = 0
EOS_token = 1
MAX_LENGTH = 20

#initialize Lang Class
class Lang:
    def __init__(self):
        #initialize containers to hold the words and corresponding index
        self.word2index = {}
        self.word2count = {}
        self.index2word = {0: "SOS", 1: "EOS"}
        self.n_words = 2  # Count SOS and EOS

    #split a sentence into words and add it to the container
    def addSentence(self, sentence):
        for word in sentence.split(' '):
            self.addWord(word)

    #If the word is not in the container, the word will be added to it,
    #else, update the word counter
    def addWord(self, word):
        if word not in self.word2index:
            self.word2index[word] = self.n_words
            self.word2count[word] = 1
            self.index2word[self.n_words] = word
            self.n_words += 1
        else:
            self.word2count[word] += 1
The Lang class is a helper that builds our dictionary. For each language, every sentence is split into words and then added to the container. Each container stores the words at the appropriate index, counts the words, and records the index of each word, so that we can use it to look up the index of a word or look up a word from its index.
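As a quick illustration of what Lang produces (this snippet is not part of the tutorial pipeline; the sample sentence is arbitrary):

lang = Lang()
lang.addSentence('tom bought me roses')
print(lang.word2index['roses'])  # 5 (indexes 0 and 1 are reserved for SOS and EOS)
print(lang.index2word[5])        # 'roses'
print(lang.n_words)              # 6 -> 4 words plus SOS and EOS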
Because our data is separated by TAB, you need to use pandas as the data loader. pandas will read our data as a DataFrame and split it into source and target sentences. For every sentence that you have, you will:
- normalize it into lowercase,
- remove all non-character symbols,
- convert it from Unicode to ASCII,
- split the sentence, so that you have each word in it.
#Normalize every sentence
def normalize_sentence(df, lang):
    sentence = df[lang].str.lower()
    #regex=True keeps this working as a pattern on newer pandas versions
    sentence = sentence.str.replace(r'[^A-Za-z\s]+', '', regex=True)
    sentence = sentence.str.normalize('NFD')
    sentence = sentence.str.encode('ascii', errors='ignore').str.decode('utf-8')
    return sentence

def read_sentence(df, lang1, lang2):
    sentence1 = normalize_sentence(df, lang1)
    sentence2 = normalize_sentence(df, lang2)
    return sentence1, sentence2

def read_file(loc, lang1, lang2):
    df = pd.read_csv(loc, delimiter='\t', header=None, names=[lang1, lang2])
    return df

def process_data(lang1, lang2):
    df = read_file('text/%s-%s.txt' % (lang1, lang2), lang1, lang2)
    print("Read %s sentence pairs" % len(df))
    sentence1, sentence2 = read_sentence(df, lang1, lang2)

    source = Lang()
    target = Lang()
    pairs = []
    for i in range(len(df)):
        if len(sentence1[i].split(' ')) < MAX_LENGTH and len(sentence2[i].split(' ')) < MAX_LENGTH:
            full = [sentence1[i], sentence2[i]]
            source.addSentence(sentence1[i])
            target.addSentence(sentence2[i])
            pairs.append(full)
    return source, target, pairs
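To see what the normalization does, here is a minimal, self-contained check (the toy DataFrame is made up for illustration; the 'eng' column name matches the lang1 value used later):

toy = pd.DataFrame({'eng': ['Tom bought me roses!', 'Are you a Japanese student?']})
print(normalize_sentence(toy, 'eng').tolist())
# ['tom bought me roses', 'are you a japanese student']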
Another useful function you will use is converting pairs into tensors. This is very important because our network only reads tensor-type data. It also matters because this is the part where an EOS token is appended to every sentence to tell the network that the input is finished. For every word in the sentence, the function gets the index of the corresponding word from the dictionary and adds the token at the end of the sentence.
def indexesFromSentence(lang, sentence):
    return [lang.word2index[word] for word in sentence.split(' ')]

def tensorFromSentence(lang, sentence):
    indexes = indexesFromSentence(lang, sentence)
    indexes.append(EOS_token)
    return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1)

def tensorsFromPair(input_lang, output_lang, pair):
    input_tensor = tensorFromSentence(input_lang, pair[0])
    target_tensor = tensorFromSentence(output_lang, pair[1])
    return (input_tensor, target_tensor)
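For example, assuming source is a Lang object built by process_data that has already seen the sentence, a single call produces a column tensor of word indexes ending with the EOS token (the sentence here is illustrative):

t = tensorFromSentence(source, 'tom bought me roses')
print(t.shape)  # torch.Size([5, 1]) -> 4 word indexes plus the EOS token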
Seq2Seq Model
A PyTorch Seq2seq model is a model built on top of a PyTorch encoder-decoder. The Encoder encodes the sentence word by word into indexes of the vocabulary of known words, and the Decoder predicts the output of the coded input by decoding the input in sequence, trying to use the last output as the next input when possible. With this method it is also possible to predict the next input to create a sentence. Each sentence is assigned a token to mark the end of the sequence, and at the end of the prediction there will also be a token to mark the end of the output. So the encoder passes its state to the decoder, which predicts the output.
The Encoder encodes our input sentence word by word in sequence, and at the end there will be a token to mark the end of the sentence. The Encoder consists of an Embedding layer and a GRU layer. The Embedding layer is a lookup table that stores the embedding of our input in a fixed-size dictionary of words; its output is passed to the GRU layer. The GRU layer is a Gated Recurrent Unit, a type of multi-layer RNN that computes the sequenced input. This layer computes the hidden state from the previous one and updates the reset, update, and new gates.
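The shape flow through those two layers can be checked in isolation. This is a standalone sketch with made-up sizes, mirroring how the Encoder below feeds one word index through an Embedding lookup and a single GRU step:

emb = nn.Embedding(10, 4)       # vocabulary of 10 words, embedding size 4
gru = nn.GRU(4, 8)              # input size 4, hidden size 8
word = torch.tensor([3])        # a single word index
x = emb(word).view(1, 1, -1)    # (seq_len=1, batch=1, embedding=4)
out, hidden = gru(x)
print(out.shape, hidden.shape)  # torch.Size([1, 1, 8]) torch.Size([1, 1, 8])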
The Decoder decodes the input from the encoder's output. It tries to predict the next output and to use it as the next input when possible. The Decoder consists of an Embedding layer, a GRU layer, and a Linear layer. The Embedding layer makes a lookup table for the output and passes it to the GRU layer, which computes the predicted output state. After that, a Linear layer helps compute the activation function that determines the true value of the predicted output.
class Encoder(nn.Module):
    def __init__(self, input_dim, hidden_dim, embbed_dim, num_layers):
        super(Encoder, self).__init__()
        #set the encoder input dimension, embbed dimension, hidden dimension, and number of layers
        self.input_dim = input_dim
        self.embbed_dim = embbed_dim
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        #initialize the embedding layer with input and embbed dimension
        self.embedding = nn.Embedding(input_dim, self.embbed_dim)
        #initialize the GRU to take the input dimension of embbed and the output dimension of hidden,
        #and set the number of gru layers
        self.gru = nn.GRU(self.embbed_dim, self.hidden_dim, num_layers=self.num_layers)

    def forward(self, src, hidden=None):
        embedded = self.embedding(src).view(1, 1, -1)
        #pass the previous hidden state along so the encoder reads the sentence as one sequence
        outputs, hidden = self.gru(embedded, hidden)
        return outputs, hidden

class Decoder(nn.Module):
    def __init__(self, output_dim, hidden_dim, embbed_dim, num_layers):
        super(Decoder, self).__init__()
        #set the decoder output dimension, embbed dimension, hidden dimension, and number of layers
        self.embbed_dim = embbed_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.num_layers = num_layers
        #initialize every layer with the appropriate dimension. The decoder consists of an
        #embedding, a GRU, a Linear layer and a LogSoftmax activation function.
        self.embedding = nn.Embedding(output_dim, self.embbed_dim)
        self.gru = nn.GRU(self.embbed_dim, self.hidden_dim, num_layers=self.num_layers)
        self.out = nn.Linear(self.hidden_dim, output_dim)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        #reshape the input to (1, batch_size)
        input = input.view(1, -1)
        embedded = F.relu(self.embedding(input))
        output, hidden = self.gru(embedded, hidden)
        prediction = self.softmax(self.out(output[0]))
        return prediction, hidden

class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device, MAX_LENGTH=MAX_LENGTH):
        super().__init__()
        #initialize the encoder and decoder
        self.encoder = encoder
        self.decoder = decoder
        self.device = device

    def forward(self, source, target, teacher_forcing_ratio=0.5):
        input_length = source.size(0)  #get the input length (number of words in the sentence)
        batch_size = target.shape[1]
        target_length = target.shape[0]
        vocab_size = self.decoder.output_dim

        #initialize a variable to hold the predicted outputs
        outputs = torch.zeros(target_length, batch_size, vocab_size).to(self.device)

        #encode every word in the sentence, carrying the hidden state forward
        encoder_hidden = None
        for i in range(input_length):
            encoder_output, encoder_hidden = self.encoder(source[i], encoder_hidden)

        #use the encoder's hidden layer as the decoder hidden
        decoder_hidden = encoder_hidden.to(self.device)
        #add a SOS token before the first predicted word
        decoder_input = torch.tensor([SOS_token], device=self.device)

        #topk is used to get the top K value over a list.
        #predict the output word from the current target word. If we enable teacher forcing,
        #the next decoder input is the next target word, else use the decoder's highest-scoring output.
        for t in range(target_length):
            decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden)
            outputs[t] = decoder_output
            teacher_force = random.random() < teacher_forcing_ratio
            topv, topi = decoder_output.topk(1)
            decoder_input = (target[t] if teacher_force else topi)
            if not teacher_force and decoder_input.item() == EOS_token:
                break
        return outputs
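Before training on the real data, you can optionally smoke-test the three classes with tiny, made-up dimensions to confirm that the shapes line up:

enc = Encoder(input_dim=10, hidden_dim=8, embbed_dim=4, num_layers=1)
dec = Decoder(output_dim=12, hidden_dim=8, embbed_dim=4, num_layers=1)
toy_model = Seq2Seq(enc, dec, device).to(device)
src = torch.tensor([[2], [5], [1]], device=device)  # 3 source word indexes
trg = torch.tensor([[3], [7], [1]], device=device)  # 3 target word indexes
print(toy_model(src, trg).shape)  # torch.Size([3, 1, 12]) -> (target_length, batch, vocab)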
Step 3) Train the Model
The training process of a Seq2seq model begins with converting each pair of sentences into tensors from their Lang indexes. Our sequence-to-sequence model will use SGD as the optimizer and the NLLLoss function to calculate the losses. The training process begins with feeding a pair of sentences to the model to predict the correct output. At each step, the output from the model is compared with the true words to compute the loss and update the parameters. Because you will use 75000 iterations, our sequence-to-sequence model will draw 75000 random pairs from our dataset.
teacher_forcing_ratio = 0.5

def calcModel(model, input_tensor, target_tensor, model_optimizer, criterion):
    model_optimizer.zero_grad()
    input_length = input_tensor.size(0)
    loss = 0

    output = model(input_tensor, target_tensor)
    num_iter = output.size(0)

    #calculate the loss of the predicted sentence against the expected result
    for ot in range(num_iter):
        loss += criterion(output[ot], target_tensor[ot])
    loss.backward()
    model_optimizer.step()
    epoch_loss = loss.item() / num_iter
    return epoch_loss

def trainModel(model, source, target, pairs, num_iteration=20000):
    model.train()
    optimizer = optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.NLLLoss()
    total_loss_iterations = 0

    training_pairs = [tensorsFromPair(source, target, random.choice(pairs))
                      for i in range(num_iteration)]

    for iter in range(1, num_iteration + 1):
        training_pair = training_pairs[iter - 1]
        input_tensor = training_pair[0]
        target_tensor = training_pair[1]
        loss = calcModel(model, input_tensor, target_tensor, optimizer, criterion)
        total_loss_iterations += loss

        if iter % 5000 == 0:
            average_loss = total_loss_iterations / 5000
            total_loss_iterations = 0
            print('%d %.4f' % (iter, average_loss))

    torch.save(model.state_dict(), 'mytraining.pt')
    return model
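Since trainModel saves the weights to mytraining.pt, you can restore a trained model later instead of retraining. A minimal sketch, assuming the model object was rebuilt with the same dimensions:

model.load_state_dict(torch.load('mytraining.pt', map_location=device))
model.eval()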
Step 4) Test the Model
The evaluation process of Seq2seq with PyTorch is to check the model output. Each pair of sentences is fed to the model, which generates the predicted words. After that, you look up the highest value in every output to find the correct index. In the end, you compare the model's prediction against the true sentence.
def evaluate(model, input_lang, output_lang, sentences, max_length=MAX_LENGTH):
    with torch.no_grad():
        input_tensor = tensorFromSentence(input_lang, sentences[0])
        output_tensor = tensorFromSentence(output_lang, sentences[1])

        decoded_words = []
        output = model(input_tensor, output_tensor)

        for ot in range(output.size(0)):
            topv, topi = output[ot].topk(1)
            if topi[0].item() == EOS_token:
                decoded_words.append('<EOS>')
                break
            else:
                decoded_words.append(output_lang.index2word[topi[0].item()])
    return decoded_words

def evaluateRandomly(model, source, target, pairs, n=10):
    for i in range(n):
        pair = random.choice(pairs)
        print('source {}'.format(pair[0]))
        print('target {}'.format(pair[1]))
        output_words = evaluate(model, source, target, pair)
        output_sentence = ' '.join(output_words)
        print('predicted {}'.format(output_sentence))
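Note that evaluate calls the model with the default teacher_forcing_ratio of 0.5, so roughly half of the decoder inputs during evaluation still come from the target sentence rather than from the model's own predictions. For a stricter test, you could disable teacher forcing explicitly inside evaluate, for example:

output = model(input_tensor, output_tensor, teacher_forcing_ratio=0.0)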
Now let's start the training with Seq to Seq, using 75000 iterations, 1 RNN layer, and a hidden size of 512.
lang1 = 'eng'
lang2 = 'ind'
source, target, pairs = process_data(lang1, lang2)

randomize = random.choice(pairs)
print('random sentence {}'.format(randomize))

#print the number of words
input_size = source.n_words
output_size = target.n_words
print('Input : {} Output : {}'.format(input_size, output_size))

embed_size = 256
hidden_size = 512
num_layers = 1
num_iteration = 75000  #matches the training log below

#create the encoder-decoder model
encoder = Encoder(input_size, hidden_size, embed_size, num_layers)
decoder = Decoder(output_size, hidden_size, embed_size, num_layers)
model = Seq2Seq(encoder, decoder, device).to(device)

#print the model
print(encoder)
print(decoder)

model = trainModel(model, source, target, pairs, num_iteration)
evaluateRandomly(model, source, target, pairs)
random sentence ['tom is finishing his work', 'tom sedang menyelesaikan pekerjaannya']
Input : 3551 Output : 4253
Encoder(
  (embedding): Embedding(3551, 256)
  (gru): GRU(256, 512)
)
Decoder(
  (embedding): Embedding(4253, 256)
  (gru): GRU(256, 512)
  (out): Linear(in_features=512, out_features=4253, bias=True)
  (softmax): LogSoftmax()
)
Seq2Seq(
  (encoder): Encoder(
    (embedding): Embedding(3551, 256)
    (gru): GRU(256, 512)
  )
  (decoder): Decoder(
    (embedding): Embedding(4253, 256)
    (gru): GRU(256, 512)
    (out): Linear(in_features=512, out_features=4253, bias=True)
    (softmax): LogSoftmax()
  )
)
5000 4.0906
10000 3.9129
15000 3.8171
20000 3.8369
25000 3.8199
30000 3.7957
35000 3.8037
40000 3.8098
45000 3.7530
50000 3.7119
55000 3.7263
60000 3.6933
65000 3.6840
70000 3.7058
75000 3.7044

> this is worth one million yen
= ini senilai satu juta yen
< tom sangat satu juta yen <EOS>

> she got good grades in english
= dia mendapatkan nilai bagus dalam bahasa inggris
< tom meminta nilai bagus dalam bahasa inggris <EOS>

> put in a little more sugar
= tambahkan sedikit gula
< tom tidak <EOS>

> are you a japanese student
= apakah kamu siswa dari jepang
< tom kamu memiliki yang jepang <EOS>

> i apologize for having to leave
= saya meminta maaf karena harus pergi
< tom tidak maaf karena harus pergi ke

> he isnt here is he
= dia tidak ada di sini kan
< tom tidak <EOS>

> speaking about trips have you ever been to kobe
= berbicara tentang wisata apa kau pernah ke kobe
< tom tidak <EOS>

> tom bought me roses
= tom membelikanku bunga mawar
< tom tidak bunga mawar <EOS>

> no one was more surprised than tom
= tidak ada seorangpun yang lebih terkejut dari tom
< tom ada orang yang lebih terkejut <EOS>

> i thought it was true
= aku kira itu benar adanya
< tom tidak <EOS>

As you can see, the predicted sentences do not match very well, so to get higher accuracy you need to train with much more data and try to add more iterations and more layers to the sequence-to-sequence model.