Preface

In the previous sections, we've been doing most of the work by hand wherever possible. We explored how tokenizers work and looked at tokenization, conversion to input IDs, padding, truncation, and attention masks.

However, as we saw in section 2, the 🤗 Transformers API can handle all of this for us with a high-level function that we'll dive into here. When you call your tokenizer directly on a sentence, you get back inputs that are ready to pass to your model:

from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)

Here, the model_inputs variable contains everything that's necessary for a model to operate well. For DistilBERT, that includes the input IDs as well as the attention mask. Other models that accept additional inputs will also have those output by the tokenizer object.
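
For a quick look at what came back, we can simply print model_inputs. The ID values in the expected output below are the ones produced by this checkpoint's tokenizer (they match the output shown later in this post):

print(model_inputs)
# {'input_ids': [101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102],
#  'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}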

As we'll see in some examples below, this method is very powerful. First, it can tokenize a single sequence:

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)

It also handles multiple sequences at a time, with no change in the API:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

model_inputs = tokenizer(sequences)

It can pad according to several objectives:

# Will pad the sequences up to the maximum sequence length
model_inputs = tokenizer(sequences, padding="longest")

# Will pad the sequences up to the model max length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, padding="max_length")

# Will pad the sequences up to the specified max length
model_inputs = tokenizer(sequences, padding="max_length", max_length=8)
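
To see what padding actually does, it helps to compare lengths and attention masks after padding. This is a minimal sketch reusing the sequences above; for this checkpoint the padding token ID is 0, and padded positions get a 0 in the attention mask:

model_inputs = tokenizer(sequences, padding="longest")
for ids, mask in zip(model_inputs["input_ids"], model_inputs["attention_mask"]):
    # Both sequences now have the same length; the shorter one is filled
    # with the pad token ID, and its attention mask is 0 at those positions
    print(len(ids), mask)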

It can also truncate sequences:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

# Will truncate the sequences that are longer than the model max length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, truncation=True)

# Will truncate the sequences that are longer than the specified max length
model_inputs = tokenizer(sequences, max_length=8, truncation=True)
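
To confirm that truncation kicked in, we can check the resulting lengths; a small sketch reusing the sequences defined above:

model_inputs = tokenizer(sequences, max_length=8, truncation=True)
for ids in model_inputs["input_ids"]:
    # No sequence exceeds max_length: the 16-token first sequence is cut
    # down to 8 tokens, while the short second one is left untouched
    print(len(ids), tokenizer.decode(ids))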

The tokenizer object can handle the conversion to specific framework tensors, which can then be sent directly to the model. For example, in the following code sample we prompt the tokenizer to return tensors from the different frameworks: "pt" returns PyTorch tensors, "tf" returns TensorFlow tensors, and "np" returns NumPy arrays:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

# Returns PyTorch tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="pt")

# Returns TensorFlow tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="tf")

# Returns NumPy arrays
model_inputs = tokenizer(sequences, padding=True, return_tensors="np")
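
For instance, the PyTorch variant gives a batched 2D tensor (assuming PyTorch is installed; the second dimension is the padded length, which is 16 tokens for these two sequences):

model_inputs = tokenizer(sequences, padding=True, return_tensors="pt")
print(type(model_inputs["input_ids"]))  # <class 'torch.Tensor'>
print(model_inputs["input_ids"].shape)  # torch.Size([2, 16])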


References

  1. NLP Course - Putting it all together: https://huggingface.co/learn/nlp-course/chapter2/6

Special tokens

If we take a look at the input IDs returned by the tokenizer, we will see they are a tiny bit different from what we had earlier:

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)
print(model_inputs["input_ids"])

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
[101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102]
[1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012]

One token ID was added at the beginning, and one at the end. Let's decode the two sequences of IDs above to see what this is about:

print(tokenizer.decode(model_inputs["input_ids"]))
print(tokenizer.decode(ids))
"[CLS] i've been waiting for a huggingface course my whole life. [SEP]"
"i've been waiting for a huggingface course my whole life."

The tokenizer added the special word [CLS] at the beginning and the special word [SEP] at the end. This is because the model was pretrained with those, so to get the same results for inference we need to add them as well. Note that some models don't add special words, or add different ones; models may also add these special words only at the beginning, or only at the end. In any case, the tokenizer knows which ones are expected and will deal with this for you.
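
As a quick illustration, the tokenizer exposes these special tokens directly, and the standard add_special_tokens argument lets you skip them, which should reproduce the "raw" IDs we built by hand above:

print(tokenizer.cls_token, tokenizer.sep_token)  # [CLS] [SEP]

# Without special tokens, we should get back the same IDs as
# tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sequence))
ids_no_special = tokenizer(sequence, add_special_tokens=False)["input_ids"]
print(ids_no_special == ids)  # True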

Wrapping up: From tokenizer to model

Now that we've seen all the individual steps the tokenizer object goes through when applied to texts, let's see one final time how it can handle multiple sequences (padding!), very long sequences (truncation!), and multiple types of tensors with its main API:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)
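
From here, the output can be turned into readable predictions. Continuing from the code above (torch is already imported), here is a minimal sketch using the model's logits and the checkpoint's built-in label mapping; for this sentiment checkpoint, id2label maps class indices to "NEGATIVE"/"POSITIVE":

predictions = torch.nn.functional.softmax(output.logits, dim=-1)
for probs in predictions:
    label_id = int(torch.argmax(probs))
    # Look up the human-readable label for the highest-probability class
    print(model.config.id2label[label_id], float(probs[label_id]))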

Closing words

My 213th blog post is done, so happy!!!!

Today is another day full of hope.