Preface

Notes on the PDF versions of the large language model papers I have read.

Operating system: Windows 10 Pro

GPT-1

Paper: Improving Language Understanding by Generative Pre-Training.

Abstract

Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
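
To make the pre-train/fine-tune recipe above concrete, here is a minimal PyTorch sketch of discriminative fine-tuning with an auxiliary language-modeling loss, which is the setup the abstract describes. The backbone module, its vocab_size attribute, and the loss weight are placeholders of my own, not the released implementation.

# Minimal sketch of GPT-1-style fine-tuning: a task head on top of a pretrained
# causal Transformer, trained with a classification loss plus an auxiliary
# language-modeling loss. The backbone and its `vocab_size` are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineTuneModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                       # pretrained Transformer decoder
        self.lm_head = nn.Linear(hidden_size, backbone.vocab_size, bias=False)
        self.clf_head = nn.Linear(hidden_size, num_classes)

    def forward(self, input_ids, labels, lm_lambda=0.5):
        hidden = self.backbone(input_ids)              # (batch, seq, hidden)
        clf_logits = self.clf_head(hidden[:, -1])      # classify from the last token
        clf_loss = F.cross_entropy(clf_logits, labels)
        # Auxiliary LM objective: predict the next token at every position.
        lm_logits = self.lm_head(hidden[:, :-1])
        lm_loss = F.cross_entropy(
            lm_logits.reshape(-1, lm_logits.size(-1)),
            input_ids[:, 1:].reshape(-1),
        )
        return clf_loss + lm_lambda * lm_loss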

Project repository:

  1. https://github.com/openai/finetune-transformer-lm

Paper PDF:

  1. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf

Notes PDF: https://cdn.jsdelivr.net/gh/LuYF-Lemon-love/susu-ChatGPT-papers/papers/03-GPT-1.pdf


GPT-2

Paper: Language Models are Unsupervised Multitask Learners.

Abstract

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
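
As a rough illustration of the zero-shot behavior described above, the snippet below conditions the public gpt2 checkpoint (via Hugging Face transformers) on a document plus a question and decodes an answer greedily. The prompt format and decoding settings are my own choices, not the paper's evaluation protocol.

# Zero-shot reading comprehension sketch: condition on a document plus a question
# and let the language model continue with an answer, with no task-specific training.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

document = "The Eiffel Tower was completed in 1889 and stands in Paris, France."
prompt = document + "\nQ: When was the Eiffel Tower completed?\nA:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=False,                       # greedy decoding for reproducibility
    pad_token_id=tokenizer.eos_token_id,   # gpt2 has no pad token by default
)
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:])
print(answer.strip())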

Project repository:

  1. https://github.com/openai/gpt-2

Paper PDF:

  1. https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf

Notes PDF: https://cdn.jsdelivr.net/gh/LuYF-Lemon-love/susu-ChatGPT-papers/papers/04-GPT-2.pdf


GLM

Paper: GLM: General Language Model Pretraining with Autoregressive Blank Infilling.

Abstract

There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). However, none of the pretraining frameworks performs the best for all tasks of three main categories including natural language understanding (NLU), unconditional generation, and conditional generation. We propose a General Language Model (GLM) based on autoregressive blank infilling to address this challenge. GLM improves blank filling pretraining by adding 2D positional encodings and allowing an arbitrary order to predict spans, which results in performance gains over BERT and T5 on NLU tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks. On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25× parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.
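
A minimal sketch of the autoregressive blank infilling objective summarized above: random spans are collapsed into [MASK] tokens in the corrupted input (Part A), and the model then generates the masked spans autoregressively in a shuffled order (Part B). The special-token names and the span sampler here are illustrative assumptions, not the GLM preprocessing code.

# Sketch of building one blank-infilling training example from a token list.
import random

def blank_infilling_example(tokens, num_spans=2, max_span_len=3):
    spans, taken = [], set()
    while len(spans) < num_spans:
        start = random.randrange(len(tokens))
        length = random.randint(1, max_span_len)
        span = range(start, min(start + length, len(tokens)))
        if taken.isdisjoint(span):          # keep sampled spans non-overlapping
            spans.append(list(span))
            taken.update(span)
    spans.sort()
    # Part A: source sequence with each span collapsed into a single [MASK] token.
    part_a, i = [], 0
    for span in spans:
        part_a += tokens[i:span[0]] + ["[MASK]"]
        i = span[-1] + 1
    part_a += tokens[i:]
    # Part B: the spans to be generated autoregressively, in an arbitrary order.
    order = random.sample(range(len(spans)), len(spans))
    part_b = []
    for j in order:
        part_b += ["[START]"] + [tokens[k] for k in spans[j]] + ["[END]"]
    return part_a, part_b, order

tokens = "GLM pretrains language models with autoregressive blank infilling".split()
print(blank_infilling_example(tokens))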

Paper page:

  1. https://aclanthology.org/2022.acl-long.26/

Paper PDF:

  1. https://aclanthology.org/2022.acl-long.26.pdf

Notes PDF: https://cdn.jsdelivr.net/gh/LuYF-Lemon-love/susu-ChatGPT-papers/papers/08-GLM.pdf


InstructGPT

Paper: Training language models to follow instructions with human feedback.

Abstract

Making language models bigger does not inherently make them better at following a user’s intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.
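
The reward-modeling step described above can be sketched as a pairwise ranking loss: for each prompt, the labeler-preferred output should score higher than the rejected one. The reward_model callable below (returning one scalar per sequence) is a placeholder of mine; this illustrates the idea only, not the paper's implementation.

# Pairwise ranking loss for reward modeling on human preference data.
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(reward_model, prompt_ids, chosen_ids, rejected_ids):
    # Score the preferred and the rejected continuation of the same prompt.
    r_chosen = reward_model(torch.cat([prompt_ids, chosen_ids], dim=-1))
    r_rejected = reward_model(torch.cat([prompt_ids, rejected_ids], dim=-1))
    # -log sigmoid(r_w - r_l): push the preferred output above the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()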

Paper page:

  1. https://arxiv.org/abs/2203.02155

Paper PDF:

  1. https://arxiv.org/pdf/2203.02155.pdf

Notes PDF: https://cdn.jsdelivr.net/gh/LuYF-Lemon-love/susu-ChatGPT-papers/papers/05-InstructGPT.pdf


P-Tuning v2

Paper: P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks.

Abstract

Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal-sized pretrained models. We also find that existing methods of prompt tuning cannot handle hard sequence labeling tasks, indicating a lack of universality. We present a novel empirical finding that properly optimized prompt tuning can be universally effective across a wide range of model scales and NLU tasks. It matches the performance of finetuning while having only 0.1%-3% tuned parameters. Our method P-Tuning v2 is an implementation of Deep Prompt Tuning (Li and Liang, 2021; Qin and Eisner, 2021) optimized and adapted for NLU. Given the universality and simplicity of P-Tuning v2, we believe it can serve as an alternative to finetuning and a strong baseline for future research. Our code and data are released at https://github.com/THUDM/P-tuning-v2.
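
A minimal sketch of the deep prompt tuning idea behind P-Tuning v2: trainable prefix vectors are prepended to the keys and values of every attention layer while the pretrained model itself stays frozen, so only the prefixes are updated. The module and tensor shapes below are illustrative assumptions, not the released code.

# Trainable per-layer (key, value) prefixes; the backbone LM remains frozen.
import torch
import torch.nn as nn

class DeepPromptPrefix(nn.Module):
    def __init__(self, num_layers, num_heads, head_dim, prefix_len=20):
        super().__init__()
        # One trainable (key, value) prefix per layer: (L, 2, H, P, D).
        self.prefix = nn.Parameter(
            torch.randn(num_layers, 2, num_heads, prefix_len, head_dim) * 0.02
        )

    def forward(self, batch_size):
        # Expand to the batch and return per-layer (key, value) pairs that each
        # attention layer concatenates in front of the real sequence.
        p = self.prefix.unsqueeze(1).expand(-1, batch_size, -1, -1, -1, -1)
        return [(p[i, :, 0], p[i, :, 1]) for i in range(p.size(0))]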

Project repository:

  1. https://github.com/THUDM/P-tuning-v2

Paper page:

  1. https://arxiv.org/abs/2110.07602

Paper PDF:

  1. https://arxiv.org/pdf/2110.07602.pdf

BibTeX:

@misc{liu2022ptuning,
  title={P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks},
  author={Xiao Liu and Kaixuan Ji and Yicheng Fu and Weng Lam Tam and Zhengxiao Du and Zhilin Yang and Jie Tang},
  year={2022},
  eprint={2110.07602},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

Notes PDF: https://cdn.jsdelivr.net/gh/LuYF-Lemon-love/susu-ChatGPT-papers/papers/01-P-Tuning-v2.pdf


LoRA

Paper: LoRA: Low-Rank Adaptation of Large Language Models.

Abstract

An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example – deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.
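
A minimal sketch of the low-rank update, assuming a standard nn.Linear layer: the pretrained weight W is frozen and only the rank-r factors A and B are trained, so Wx becomes Wx + (alpha/r)·BAx. This is an illustration of the idea, not the released loralib package; the rank and scaling values are just example defaults.

# Wrap a frozen pretrained linear layer with a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)         # freeze pretrained W
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # BA = 0 at init
        self.scaling = alpha / r

    def forward(self, x):
        # W x + (alpha / r) * B A x ; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Example: adapted = LoRALinear(nn.Linear(768, 768), r=8)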

Project repository:

  1. https://github.com/microsoft/LoRA

Paper page:

  1. https://arxiv.org/abs/2106.09685

Paper PDF:

  1. https://arxiv.org/pdf/2106.09685.pdf

BibTeX:

@misc{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Edward J. Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
  year={2021},
  eprint={2106.09685},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

Notes PDF: https://cdn.jsdelivr.net/gh/LuYF-Lemon-love/susu-ChatGPT-papers/papers/02-LoRA.pdf


DPO

Paper: Direct Preference Optimization: Your Language Model is Secretly a Reward Model.

Abstract

While large-scale unsupervised language models (LMs) learn broad world knowledge and some reasoning skills, achieving precise control of their behavior is difficult due to the completely unsupervised nature of their training. Existing methods for gaining such steerability collect human labels of the relative quality of model generations and fine-tune the unsupervised LM to align with these preferences, often with reinforcement learning from human feedback (RLHF). However, RLHF is a complex and often unstable procedure, first fitting a reward model that reflects the human preferences, and then fine-tuning the large unsupervised LM using reinforcement learning to maximize this estimated reward without drifting too far from the original model. In this paper, we leverage a mapping between reward functions and optimal policies to show that this constrained reward maximization problem can be optimized exactly with a single stage of policy training, essentially solving a classification problem on the human preference data. The resulting algorithm, which we call Direct Preference Optimization (DPO), is stable, performant and computationally lightweight, eliminating the need for fitting a reward model, sampling from the LM during fine-tuning, or performing significant hyperparameter tuning. Our experiments show that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods. Notably, fine-tuning with DPO exceeds RLHF’s ability to control sentiment of generations and improves response quality in summarization and single-turn dialogue while being substantially simpler to implement and train.
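
The single-stage objective described above reduces to a classification-style loss on log-probability ratios between the policy and a frozen reference model, with no reward model or RL loop. The sketch below assumes the per-response log-probabilities (summed over tokens) have already been computed for the preferred (y_w) and rejected (y_l) responses; the names and the beta value are illustrative.

# DPO loss given summed log-probabilities of preferred (w) and rejected (l) responses.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # log pi_theta(y|x) - log pi_ref(y|x) for each response in the pair.
    ratio_w = policy_logp_w - ref_logp_w
    ratio_l = policy_logp_l - ref_logp_l
    # -log sigmoid(beta * (ratio_w - ratio_l)): prefer y_w over y_l.
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()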

Paper page:

  1. https://arxiv.org/abs/2305.18290

Paper PDF:

  1. https://arxiv.org/pdf/2305.18290.pdf

Notes PDF: https://cdn.jsdelivr.net/gh/LuYF-Lemon-love/susu-ChatGPT-papers/papers/07-DPO.pdf


Closing

My seventy-fourth blog post is done. So happy!!!!

Today is another day full of hope.