Preface
A brief introduction to how to fine-tune Llama.
src link: https://github.com/LuYF-Lemon-love/fork-huggingface-llama-recipes
Llama Models: https://modelscope.cn/organization/LLM-Research
Operating System: Ubuntu 22.04.4 LTS
References
- NLP Course - Fine Tuning
Introduction
Running inference on a model is often not enough. In many cases you will need to fine-tune the model on a custom dataset. The following script shows how to do that.
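The script in the next section trains on the public IMDB dataset, but the same flow works with your own data: anything that `datasets.load_dataset` can read and that exposes a text column can be handed to the trainer. A minimal sketch, where the file name `my_data.jsonl` and its `text` field are assumptions for illustration:

```python
from datasets import load_dataset

# Each line of my_data.jsonl is assumed to be a JSON object like {"text": "..."}.
dataset = load_dataset("json", data_files="my_data.jsonl", split="train")

# The column name must match dataset_text_field in SFTConfig (see the script below).
print(dataset[0]["text"])
```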
PEFT Finetuning
src link: https://github.com/LuYF-Lemon-love/fork-huggingface-llama-recipes/blob/main/fine_tune/peft_finetuning.py
```python
# peft_finetuning.py: fine-tune Llama-3.2-3B on IMDB with LoRA / QLoRA.
import torch
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from modelscope import snapshot_download

# Download the model weights from ModelScope.
model_dir = snapshot_download('LLM-Research/Llama-3.2-3B')

tokenizer = AutoTokenizer.from_pretrained(model_dir)
# Llama has no pad token by default; reuse the EOS token for padding.
tokenizer.pad_token = tokenizer.eos_token

# Training corpus: IMDB movie reviews, whose text lives in the "text" column.
dataset = load_dataset("imdb", split="train")

sft_config = SFTConfig(
    dataset_text_field="text",
    per_device_train_batch_size=4,
    max_seq_length=20,
    num_train_epochs=3,
    output_dir="./results",
    logging_dir='./logs',
    logging_steps=10,
)

QLoRA = True
if QLoRA:
    # QLoRA: quantize the frozen base weights to 4-bit NF4.
    quantization_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_quant_type="nf4",
    )
    lora_config = LoraConfig(
        r=8,
        target_modules="all-linear",
        bias="none",
        task_type="CAUSAL_LM",
    )
else:
    quantization_config = None
    lora_config = None

# Pass the quantization config here so the base model is actually loaded
# in 4 bits when QLoRA is enabled.
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    quantization_config=quantization_config,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=sft_config,
    peft_config=lora_config,
    train_dataset=dataset,
)

trainer.train()
```
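After training, the LoRA adapter can be saved and loaded back for inference. Below is a minimal sketch, assuming you first call `trainer.save_model("./results")` after `trainer.train()`; the prompt and generation settings are arbitrary examples.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
from modelscope import snapshot_download

# Assumes trainer.save_model("./results") was called after training,
# so "./results" contains adapter_config.json and the adapter weights.
adapter_dir = "./results"

# The adapter config records the base model path, so this loads the base
# model and attaches the trained LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_dir, torch_dtype=torch.float16, device_map="auto"
)

# Reload the tokenizer from the original base model download.
model_dir = snapshot_download('LLM-Research/Llama-3.2-3B')
tokenizer = AutoTokenizer.from_pretrained(model_dir)

inputs = tokenizer("The movie was", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```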
Conclusion
The 201st blog post is finished. So happy!!!!
Today is another day full of hope.