add Qwen2-7B-Instruct_lora

This commit is contained in:
aJupyter 2024-09-14 18:34:31 +08:00
parent 8438cdfc26
commit 198600a774
5 changed files with 265 additions and 36 deletions

.gitignore

@ -6,6 +6,8 @@ data/
pdf/
.idea/
logs/
.vscode/
work_dirs/
# *.jsonl
# *.json

README.md

@ -59,8 +59,9 @@
| ChatGLM3_6B | LoRA | [chatglm3_6b_lora_alpaca_e3.py](./xtuner_config/chatglm3_6b_lora_alpaca_e3.py) | |
| DeepSeek MoE_16B_chat | QLoRA | [deepseek_moe_16b_chat_qlora_oasst1_e3.py](./xtuner_config/deepseek_moe_16b_chat_qlora_oasst1_e3.py) | |
| Mixtral 8x7B_instruct | QLoRA | [mixtral_8x7b_instruct_qlora_oasst1_e3.py](./xtuner_config/mixtral_8x7b_instruct_qlora_oasst1_e3.py) | |
| LLaMA3_8b_instruct | QLoRA | [aiwei_llama3_8b_instruct_qlora_e3.py](./xtuner_config/aiwei_llama3_8b_instruct_qlora_e3.py) | [OpenXLab](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM-LLaMA3_8b_instruct_aiwei/tree/main), [ModelScope](https://modelscope.cn/models/aJupyter/EmoLLM-LLaMA3_8b_instruct_aiwei/files) |
| LLaMA3_8b_instruct | QLoRA | [llama3_8b_instruct_qlora_alpaca_e3_M_ruozhi_scM.py](./xtuner_config/llama3_8b_instruct_qlora_alpaca_e3_M_ruozhi_scM.py) |[OpenXLab](https://openxlab.org.cn/models/detail/chg0901/EmoLLM-Llama3-8B-Instruct3.0), [ModelScope](https://modelscope.cn/models/chg0901/EmoLLM-Llama3-8B-Instruct3.0/summary) |
| LLaMA3_8B_instruct | QLoRA | [aiwei_llama3_8b_instruct_qlora_e3.py](./xtuner_config/aiwei_llama3_8b_instruct_qlora_e3.py) | [OpenXLab](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM-LLaMA3_8b_instruct_aiwei/tree/main), [ModelScope](https://modelscope.cn/models/aJupyter/EmoLLM-LLaMA3_8b_instruct_aiwei/files) |
| LLaMA3_8B_instruct | QLoRA | [llama3_8b_instruct_qlora_alpaca_e3_M_ruozhi_scM.py](./xtuner_config/llama3_8b_instruct_qlora_alpaca_e3_M_ruozhi_scM.py) |[OpenXLab](https://openxlab.org.cn/models/detail/chg0901/EmoLLM-Llama3-8B-Instruct3.0), [ModelScope](https://modelscope.cn/models/chg0901/EmoLLM-Llama3-8B-Instruct3.0/summary) |
| Qwen2-7B-Instruct | LoRA | [Qwen2-7B-Instruct_lora.py](./xtuner_config/Qwen2-7B-Instruct_lora.py) |[ModelScope](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen2-7B-Instruct_lora/) |
| …… | …… | …… | …… |
</div>
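As a quick illustration of how one of the LoRA checkpoints listed above can be used for inference, here is a minimal sketch with `transformers` and `peft`. It assumes the Qwen2-7B-Instruct base model and that the EmoLLM adapter has been downloaded locally; the paths are placeholders, not the exact layout of the ModelScope repository:

```python
# Minimal inference sketch for a LoRA checkpoint (paths are placeholders).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "Qwen/Qwen2-7B-Instruct"               # base model id or local directory
adapter_path = "./EmoLLM_Qwen2-7B-Instruct_lora"   # hypothetical local adapter directory

tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter_path)  # attach the LoRA weights
model.eval()

messages = [
    {"role": "system", "content": "现在你是一个心理专家,我有一些心理问题,请你用专业的知识帮我解决。"},
    {"role": "user", "content": "我最近压力很大,晚上总是睡不好。"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```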
@ -100,6 +101,8 @@
</table>
## 🎇Recent Updates
- 【2024.09.14】The LoRA fine-tuned model based on Qwen2-7B-Instruct is open-sourced. Fine-tuning configuration file: [Qwen2-7B-Instruct_lora.py](./xtuner_config/Qwen2-7B-Instruct_lora.py), model weights: [ModelScope](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen2-7B-Instruct_lora/)
- 【2024.08】The LoRA fine-tuned model based on GLM4-9B-chat (trained with LLaMA-Factory) is open-sourced. For details, see the [fine-tuning tutorial](./doc/GLM-4-9B-chat%20Lora%20微调llama-factory.md), model weights: [ModelScope](https://www.modelscope.cn/models/wwewwt/EmoLLM-glm-4-9b-chat/summary)
- 【2024.07.16】Welcome to try EmoLLM V3.0, a full fine-tune of the InternLM2.5-7B-Chat model. Fine-tuning configuration file: [internlm2_5_chat_7b_full.py](./xtuner_config/internlm2_5_chat_7b_full.py), model weights: [OpenXLab](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_V3.0), [ModelScope](https://modelscope.cn/models/chg0901/EmoLLMV3.0). WebDemo: [OpenXLab apps](https://openxlab.org.cn/apps/detail/chg0901/EmoLLMV3.0), [full fine-tuning tutorial on Zhihu](https://zhuanlan.zhihu.com/p/708931911).
- 【2024.07】The stable EmoLLM V2.0 is recommended for daily use and academic research. Model weights: [OpenXLab](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full/tree/main).
- 【2024.07】Added an InternLM2_5_7B_chat [fine-tuning configuration](./xtuner_config/internlm2_5_chat_7b_qlora_oasst1_e3.py); model files are published on [ModelScope](https://www.modelscope.cn/models/z342994309/emollm_interlm2_5/).
@ -117,15 +120,14 @@
- 【2024.03.11】 **EmoLLM V2.0 improves on EmoLLM V1.0 across the board and now surpasses role-playing ChatGPT on psychological counseling tasks!** [Try EmoLLM V2.0](https://openxlab.org.cn/apps/detail/Farewell1/EmoLLMV2.0); updated the [dataset statistics and details](./datasets/) and the [roadmap](./assets/Roadmap_ZH.png)
- 【2024.03.09】 Added concurrency to speed up [QA pair generation](./scripts/qa_generation/) and the [RAG pipeline](./rag/)
- 【2024.03.03】 [EmoLLM V2.0, fully fine-tuned from InternLM2-7B-chat, is open-sourced](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full) (requires two A100 80G GPUs); updated the professional evaluation, see [evaluate](./evaluate/); updated a PaddleOCR-based PDF-to-txt tool script, see [scripts](./scripts/)
<details>
<summary>View More</summary>
- 【2024.02.29】Updated the objective evaluation calculation, see [evaluate](./evaluate/); updated a series of datasets, see [datasets](./datasets/)
- 【2024.02.27】Updated the English README and a series of datasets (licking-dog and single-turn dialogue)
- 【2024.02.23】Launched the "Gentle Lady Psychologist Ai Wei" based on InternLM2_7B_chat_qlora. [Get the model weights](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), [configuration file](xtuner_config/aiwei-internlm2_chat_7b_qlora.py), [online demo](https://openxlab.org.cn/apps/detail/ajupyter/EmoLLM-aiwei)
- 【2024.02.23】Updated [several fine-tuning configurations](/xtuner_config/), added [data_pro.json](/datasets/data_pro.json) (larger, more comprehensive scenarios, richer content) and [aiwei.json](/datasets/aiwei.json) (dedicated to the gentle-lady role-play, with Emoji expressions); the "Gentle Lady Psychologist Ai Wei" is coming soon
- 【2024.02.18】 The [full fine-tuned version based on Qwen1_5-0_5B-Chat is open-sourced](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen1_5-0_5B-Chat_full_sft/summary); friends with limited compute can try it out~
<details>
<summary>View More</summary>
- 【2024.02.06】 EmoLLM has reached 18.7k downloads on the [**OpenXLab**](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model) platform, welcome to try it!
<p align="center">
@ -185,7 +187,7 @@
- [User Guide](#使用指南)
- [🍪Quick Start](#快速体验)
- [📌Data Construction](#数据构建)
- [🎨Fine-tuning Guide](#微调指南)
- [🎨Incremental Pre-training and Fine-tuning Guide](#增量预训练微调指南)
- [🔧Deployment Guide](#部署指南)
- [⚙RAG (Retrieval-Augmented Generation)](#rag检索增强生成)
- [🎓Evaluation Guide](#评测指南)
@ -204,6 +206,7 @@
###### Pre-development Configuration Requirements
- Hardware: A100 40G (only for InternLM2_7B_chat + QLoRA fine-tuning with DeepSpeed ZeRO-2 optimization)
- [TODO]: publish more details on hardware consumption
###### User Guide
@ -216,7 +219,7 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
2. Read in order, or jump to the sections you are interested in:
- [Quick Start](#快速体验)
- [Data Construction](#数据构建)
- [Fine-tuning Guide](#微调指南)
- [Incremental Pre-training and Fine-tuning Guide](#增量预训练微调指南)
- [Deployment Guide](#部署指南)
- [RAG](#rag检索增强生成)
- [Evaluation Guide](#评测指南)
@ -230,19 +233,21 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
### 📌Data Construction
- Please read the [data construction guide](generate_data/tutorial.md)
- The dataset used for fine-tuning is available at [datasets](datasets/data.json); a sample record is sketched below
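The Qwen2 configuration added in this commit points at `../datasets/aiwei.json` with `dataset_map_fn=None`, meaning the records are expected to already be in xtuner's multi-turn conversation format. A rough sketch of one record, with invented values and the schema assumed from xtuner's conventions:

```python
# Illustrative training record in the assumed xtuner "conversation" schema.
record = {
    "conversation": [
        {
            "system": "现在你是一个心理专家,我有一些心理问题,请你用专业的知识帮我解决。",
            "input": "我最近总是失眠,心里很烦躁。",
            "output": "听起来你最近承受了不小的压力,我们可以先梳理一下让你烦躁的事情……",
        },
        # later turns of the same dialogue omit the "system" field
        {
            "input": "我该怎么调整呢?",
            "output": "可以先从规律作息和放松训练开始……",
        },
    ]
}
```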
### 🎨Fine-tuning Guide
For details, see the [fine-tuning guide](xtuner_config/README.md)
### 🎨Incremental Pre-training and Fine-tuning Guide
- For incremental pre-training, see the [incremental pre-training guide](./xtuner_config/pt/README.md)
- For full, LoRA, and QLoRA fine-tuning based on **xtuner**, see the [fine-tuning guide](./xtuner_config/README.md); a launch sketch follows this list
- For full, LoRA, and QLoRA fine-tuning based on **ms-swift**, see the [fine-tuning guide](./swift/README.md)
- For full, LoRA, and QLoRA fine-tuning based on **LLaMA-Factory**, see the [fine-tuning guide](./doc/GLM-4-9B-chat%20Lora%20微调llama-factory.md)
- [TODO]: add DPO training
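For the xtuner route, training is normally launched through the `xtuner` command-line tool. A minimal launch sketch, wrapped in Python for consistency with the other examples; the DeepSpeed flag is optional, and the config path is the file added in this commit:

```python
# Launch LoRA fine-tuning with the config from this commit (sketch).
import subprocess

subprocess.run(
    [
        "xtuner", "train",
        "xtuner_config/Qwen2-7B-Instruct_lora.py",  # training config
        "--deepspeed", "deepspeed_zero2",            # optional: DeepSpeed ZeRO-2
    ],
    check=True,
)
```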
### 🔧Deployment Guide
- Demo deployment: see the [deployment guide](demo/README.md)
- Quantized deployment based on [LMDeploy](https://github.com/InternLM/lmdeploy/): see [deploy](./deploy/lmdeploy.md); a minimal serving sketch follows this list
- [TODO]: vLLM-based deployment guide
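As a minimal illustration of the LMDeploy route (a sketch only: the model path is a placeholder for a merged EmoLLM checkpoint, and `pipeline` is LMDeploy's high-level Python API):

```python
# Minimal LMDeploy serving sketch (placeholder model path).
from lmdeploy import pipeline

pipe = pipeline("./EmoLLM_merged")               # merged checkpoint (LoRA already fused)
responses = pipe(["我最近压力很大,该怎么办?"])     # batch inference over a list of prompts
print(responses[0].text)
```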
### ⚙RAG (Retrieval-Augmented Generation)
@ -257,13 +262,14 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
### Frameworks Used
- [Xtuner](https://github.com/InternLM/xtuner): for fine-tuning
- [xtuner](https://github.com/InternLM/xtuner): for fine-tuning
- [Transformers](https://github.com/huggingface/transformers)
- [Pytorch](https://pytorch.org/)
- [LMDeploy](https://github.com/InternLM/lmdeploy/): for quantized deployment
- [Streamlit](https://streamlit.io/): for building the demo
- [DeepSpeed](https://github.com/microsoft/DeepSpeed): for parallel training
- …
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/blob/main): training framework
- [ms-swift](https://github.com/modelscope/ms-swift): training framework
#### How to Participate in This Project

README_EN.md

@ -63,6 +63,7 @@
| Mixtral 8x7B_instruct | QLoRA | [mixtral_8x7b_instruct_qlora_oasst1_e3.py](./xtuner_config/mixtral_8x7b_instruct_qlora_oasst1_e3.py) | |
| LLaMA3_8b_instruct | QLoRA | [aiwei_llama3_8b_instruct_qlora_e3.py](./xtuner_config/aiwei_llama3_8b_instruct_qlora_e3.py) | [OpenXLab](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM-LLaMA3_8b_instruct_aiwei/tree/main), [ModelScope](https://modelscope.cn/models/aJupyter/EmoLLM-LLaMA3_8b_instruct_aiwei/files) |
| LLaMA3_8b_instruct | QLoRA | [llama3_8b_instruct_qlora_alpaca_e3_M_ruozhi_scM.py](./xtuner_config/llama3_8b_instruct_qlora_alpaca_e3_M_ruozhi_scM.py) |[OpenXLab](https://openxlab.org.cn/models/detail/chg0901/EmoLLM-Llama3-8B-Instruct3.0), [ModelScope](https://modelscope.cn/models/chg0901/EmoLLM-Llama3-8B-Instruct3.0/summary) |
| Qwen2-7B-Instruct | LoRA | [Qwen2-7B-Instruct_lora.py](./xtuner_config/Qwen2-7B-Instruct_lora.py) |[ModelScope](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen2-7B-Instruct_lora/) |
| …… | …… | …… | …… |
@ -104,11 +105,13 @@ The Model aims to fully understand and promote the mental health of individuals,
</table>
## Recent Updates
- 【2024.07.16】 Welcome everyone to experience EmoLLM V3.0. This model is a fully fine-tuned version based on the InternLM2.5-7B-Chat model. The fine-tuning configuration file can be found at: [internlm2_5_chat_7b_full.py](./xtuner_config/internlm2_5_chat_7b_full.py). Model weights are available at: [OpenXLab](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_V3.0), [ModelScope](https://modelscope.cn/models/chg0901/EmoLLMV3.0). WebDemo is available at: [OpenXLab apps](https://openxlab.org.cn/apps/detail/chg0901/EmoLLMV3.0), [Full fine-tuning tutorial on Zhihu](https://zhuanlan.zhihu.com/p/708931911).
- 【2024.07】Welcome to use the stable version of EmoLLM V2.0 for daily use and academic research. Model weight link: [OpenXLab](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full/tree/main).
- 【2024.07】Added InternLM2_5_7B_chat[fine-tuning configuration](./xtuner_config/internlm2_5_chat_7b_qlora_oasst1_e3.py)、model file [ModelScope](https://www.modelscope.cn/models/z342994309/emollm_interlm2_5/)。
- 【2024.06】 Added [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)[GLM4-9B-chat fine-tuning guide](./doc/GLM-4-9B-chat%20Lora%20微调llama-factory.md), added [swift-based fine-tuning guide](./swift/), the paper [ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models](https://arxiv.org/abs/2406.14952) cited EmoLLM and EmoLLM achieved good results.
- 【2024.05.28】The multi-turn dialogue dataset **CPsyCunD** and **professional evaluation method** used by EmoLLM have been released. For details, please see the 2024 ACL findings[《CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling》](https://arxiv.org/abs/2405.16433)!
- [2024.09.14] The LoRA fine-tuned model based on Qwen2-7B-Instruct is open-sourced. Fine-tuning configuration file: [Qwen2-7B-Instruct_lora.py](./xtuner_config/Qwen2-7B-Instruct_lora.py), model weight link: [ModelScope](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen2-7B-Instruct_lora/)
- [2024.08] The LoRA fine-tuned model based on GLM4-9B-chat (trained with LLaMA-Factory) is open-sourced. For details, see the [fine-tuning tutorial](./doc/GLM-4-9B-chat%20Lora%20微调llama-factory.md), model weight link: [ModelScope](https://www.modelscope.cn/models/wwewwt/EmoLLM-glm-4-9b-chat/summary)
- [2024.07.16] Welcome everyone to experience EmoLLM V3.0. This model is a fully fine-tuned version based on the InternLM2.5-7B-Chat model. The fine-tuning configuration file can be found at: [internlm2_5_chat_7b_full.py](./xtuner_config/internlm2_5_chat_7b_full.py). Model weights are available at: [OpenXLab](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_V3.0), [ModelScope](https://modelscope.cn/models/chg0901/EmoLLMV3.0). WebDemo is available at: [OpenXLab apps](https://openxlab.org.cn/apps/detail/chg0901/EmoLLMV3.0), [Full fine-tuning tutorial on Zhihu](https://zhuanlan.zhihu.com/p/708931911).
- [2024.07] Welcome to use the stable version of EmoLLM V2.0 for daily use and academic research. Model weight link: [OpenXLab](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full/tree/main).
- [2024.07] Added an InternLM2_5_7B_chat [fine-tuning configuration](./xtuner_config/internlm2_5_chat_7b_qlora_oasst1_e3.py); the model files are available on [ModelScope](https://www.modelscope.cn/models/z342994309/emollm_interlm2_5/).
- [2024.06] Added a [GLM4-9B-chat fine-tuning guide](./doc/GLM-4-9B-chat%20Lora%20微调llama-factory.md) based on [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) and a [swift-based fine-tuning guide](./swift/); the paper [ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models](https://arxiv.org/abs/2406.14952) cited EmoLLM, which achieved good results.
- [2024.05.28] The multi-turn dialogue dataset **CPsyCounD** and the **professional evaluation method** used by EmoLLM have been released. For details, please see the ACL 2024 Findings paper [CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling](https://arxiv.org/abs/2405.16433)!
- [2024.05.08] EmoLLM**Daddy-like BF V0.1** is public now in [1. **Baidu AppBuilder**](https://appbuilder.baidu.com/s/4cLyw) and [2. **OpenXLab**](https://openxlab.org.cn/apps/detail/chg0901/EmoLLM3.0_Gradio_Llama3-8B-Instruct3.0), welcome to like and add it to your collections!
- [2024.05.07] [Incremental Pre-training Guide](xtuner_config/pt/README.md)
- [2024.05.04] [EmoLLM3.0 OpenXLab Demo](https://st-app-center-006861-9746-jlroxvg.openxlab.space/) based on LLaMA3_8b_instruct is available now ([restart link](https://openxlab.org.cn/apps/detail/chg0901/EmoLLM-Llama3-8B-Instruct3.0)), the [LLaMA3 fine-tuning guide](xtuner_config/README_llama3_8b_instruct_qlora_alpaca_e3_M.md) is updated, and the LLaMA3_8b_instruct QLoRA fine-tuned model EmoLLM3.0 weights are released on the [**OpenXLab**](https://openxlab.org.cn/models/detail/chg0901/EmoLLM-Llama3-8B-Instruct3.0) and [**ModelScope**](https://modelscope.cn/models/chg0901/EmoLLM-Llama3-8B-Instruct3.0/summary) platforms
@ -122,6 +125,10 @@ The Model aims to fully understand and promote the mental health of individuals,
- [2024.03.11] **EmoLLM V2.0 is greatly improved in all scores compared to EmoLLM V1.0. Surpasses the performance of Role-playing ChatGPT on counseling tasks!** [Click to experience EmoLLM V2.0](https://openxlab.org.cn/apps/detail/Farewell1/EmoLLMV2.0), update [dataset statistics and details](./datasets/), [Roadmap](./assets/Roadmap_ZH.png)
- [2024.03.09] Add concurrency acceleration [QA pair generation](./scripts/qa_generation/), [RAG pipeline](./rag/)
- [2024.03.03] [Based on InternLM2-7B-chat full fine-tuned version EmoLLM V2.0 open sourced](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full), need two A100*80G, update professional evaluation, see [evaluate](./evaluate/), update PaddleOCR-based PDF to txt tool scripts, see [scripts](./scripts/).
<details>
<summary>View More</summary>
- [2024.02.29] Updated objective assessment calculations, see [evaluate](./evaluate/) for details. A series of datasets have also been updated, see [datasets](./datasets/) for details.
- [2024.02.27] Updated English README and a series of datasets (licking dogs and one-round dialogue)
- [2024.02.23] The "Gentle Lady Psychologist Ai Wei" based on InternLM2_7B_chat_qlora was launched. [Click here to obtain the model weights](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), [configuration file](xtuner_config/aiwei-internlm2_chat_7b_qlora.py), [online experience link](https://openxlab.org.cn/apps/detail/ajupyter/EmoLLM-aiwei)
@ -129,11 +136,7 @@ The Model aims to fully understand and promote the mental health of individuals,
- [2024.02.23] Updated [several fine-tuning configurations](/xtuner_config/), added [data_pro.json](/datasets/data_pro.json) (more quantity, more comprehensive scenarios, richer content) and [aiwei.json](/datasets/aiwei.json) (dedicated to the gentle lady role-play, featuring Emoji expressions), the "Gentle Lady Psychologist Ai Wei" is coming soon.
- [2024.02.18] The full fine-tuned version based on Qwen1_5-0_5B-Chat has been [open-sourced](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen1_5-0_5B-Chat_full_sft/summary). Friends with limited computational resources can now dive in and explore it.
<details>
<summary>View More</summary>
- [2024.02.06] [Open-sourced based on the Qwen1_5-0_5B-Chat full-scale fine-tuned version](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen1_5-0_5B-Chat_full_sft/summary), friends with limited computing power can start experimenting~
<p align="center">
@ -187,7 +190,7 @@ The Model aims to fully understand and promote the mental health of individuals,
- [User Guide](#user-guide)
- [🍪Quick start](#quick-start)
- [📌Data Construction](#data-construction)
- [🎨Fine-tuning Guide](#fine-tuning-guide)
- [🎨Incremental Pre-training and Fine-tuning Guide](#incremental-pre-training-and-fine-tuning-guide)
- [🔧Deployment Guide](#deployment-guide)
- [⚙RAG (Retrieval Augmented Generation)](#rag-retrieval-augmented-generation)
- [🎓Evaluation Guide](#evaluation-guide)
@ -206,6 +209,7 @@ The Model aims to fully understand and promote the mental health of individuals,
###### Pre-development Configuration Requirements.
- A100 40G (specifically for InternLM2_7B_chat + qlora fine-tuning + deepspeed zero2 optimization)
- **[TODO]**: Publish more details about hardware consumption.
###### User Guide
@ -218,7 +222,7 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
1. Read in sequence or read sections you're interested in
- [Quick Start](#quick-start)
- [Data Construction](#data-construction)
- [Fine-tuning Guide](#fine-tuning-guide)
- [Fine-tuning Guide](#incremental-pre-training-and-fine-tuning-guide)
- [Deployment Guide](#deployment-guide)
- [RAG](#rag-retrieval-augmented-generation)
- [Evaluation Guide](#evaluation-guide)
@ -230,19 +234,22 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
- Quick coding: [Baby EmoLLM](quick_start/Baby_EmoLLM.ipynb)
### 📌Data Construction
- Please read the [Data Construction Guide](generate_data/tutorial_EN.md) for reference.
- The dataset used for this fine-tuning can be found at [datasets](datasets/data.json)
### 🎨Fine-tuning Guide
### 🎨Incremental Pre-training and Fine-tuning Guide
- For details on incremental pre-training, see [Incremental Pre-training Guide](./xtuner_config/pt/README.md).
- For full-scale, LoRA, and QLoRA fine-tuning based on **xtuner**, see [Fine-tuning Guide](./xtuner_config/README_EN.md).
- For full-scale, LoRA, and QLoRA fine-tuning based on **ms-swift**, see [Fine-tuning Guide](./swift/README_EN.md).
- For full-scale, LoRA, and QLoRA fine-tuning based on **LLaMA-Factory**, see [Fine-tuning Guide](./doc/GLM-4-9B-chat%20Lora%20微调llama-factory.md).
- **[TODO]**: Update DPO training.
For details, see the [fine-tuning guide](xtuner_config/README_EN.md)
### 🔧Deployment Guide
- Demo deployment: see [deployment guide](./demo/README_EN.md) for details.
- Quantized deployment based on [LMDeploy](https://github.com/InternLM/lmdeploy/): see [deploy](./deploy/lmdeploy_EN.md)
- **[TODO]**: Deployment Guide for VLLM
### ⚙RAG (Retrieval Augmented Generation)
@ -263,7 +270,8 @@ For details, see the [fine-tuning guide](xtuner_config/README_EN.md)
- [LMDeploy](https://github.com/InternLM/lmdeploy/): for quantitative deployment
- [Streamlit](https://streamlit.io/): for building demos
- [DeepSpeed](https://github.com/microsoft/DeepSpeed): for parallel training
- …
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/blob/main)
- [ms-swift](https://github.com/modelscope/ms-swift)
#### How to participate in this project

ModelScope model upload script

@ -1,11 +1,11 @@
from modelscope.hub.api import HubApi
YOUR_ACCESS_TOKEN = ''  # enter your ModelScope access token
api = HubApi()
api.login(YOUR_ACCESS_TOKEN)
api.push_model(
    model_id="zealot5209/EmoLLM-Scientist",  # your_name/model_id
    model_dir="./merged"  # local model directory; it must contain a configuration.json
)

xtuner_config/Qwen2-7B-Instruct_lora.py

@ -0,0 +1,213 @@
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
                            LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)
from xtuner.dataset import process_hf_dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import alpaca_map_fn, template_map_fn_factory
from xtuner.engine.hooks import (DatasetInfoHook, EvaluateChatHook,
                                 VarlenAttnArgsToMessageHubHook)
from xtuner.engine.runner import TrainLoop
from xtuner.model import SupervisedFinetune
from xtuner.parallel.sequence import SequenceParallelSampler
from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE
#######################################################################
# PART 1 Settings #
#######################################################################
# Model
pretrained_model_name_or_path = 'Qwen2-7B-Instruct' # your model path
use_varlen_attn = False
# Data
alpaca_en_path = '../datasets/aiwei.json'
prompt_template = PROMPT_TEMPLATE.qwen_chat
max_length = 1024
pack_to_max_length = True
# parallel
sequence_parallel_size = 1
# Scheduler & Optimizer
batch_size = 8 # per_device
accumulative_counts = 16
accumulative_counts *= sequence_parallel_size
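# Note: the effective batch size per optimizer step is
# batch_size * accumulative_counts * num_gpus (8 * 16 = 128 on a single GPU).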
dataloader_num_workers = 4
max_epochs = 3
optim_type = AdamW
lr = 1e-5
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1 # grad clip
warmup_ratio = 0.03
# Save
save_steps = 100
save_total_limit = 2 # Maximum checkpoints to keep (-1 means unlimited)
# Evaluate the generation performance during the training
evaluation_freq = 100
SYSTEM = "现在你是一个心理专家,我有一些心理问题,请你用专业的知识帮我解决。"
evaluation_inputs = [
    '我压力很大', '生活没意思', "非常容易羡慕别人啊"
]
#######################################################################
# PART 2 Model & Tokenizer #
#######################################################################
tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path=pretrained_model_name_or_path,
    trust_remote_code=True,
    padding_side='right')
model = dict(
    type=SupervisedFinetune,
    use_varlen_attn=use_varlen_attn,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        trust_remote_code=True,
        torch_dtype=torch.float16,
    ),
    lora=dict(
        type=LoraConfig,
        r=32,
        lora_alpha=16,
        lora_dropout=0.1,
        bias='none',
        task_type='CAUSAL_LM'))
#######################################################################
# PART 3 Dataset & Dataloader #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
    dataset=dict(type=load_dataset, path='json',
                 data_files=dict(train=alpaca_en_path)),
    tokenizer=tokenizer,
    max_length=max_length,
    dataset_map_fn=None,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)
sampler = SequenceParallelSampler \
    if sequence_parallel_size > 1 else DefaultSampler
train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=alpaca_en,
    sampler=dict(type=sampler, shuffle=True),
    collate_fn=dict(type=default_collate_fn, use_varlen_attn=use_varlen_attn))
#######################################################################
# PART 4 Scheduler & Optimizer #
#######################################################################
# optimizer
optim_wrapper = dict(
    type=AmpOptimWrapper,
    optimizer=dict(
        type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
    clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
    accumulative_counts=accumulative_counts,
    loss_scale='dynamic',
    dtype='float16')
# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md # noqa: E501
param_scheduler = [
    dict(
        type=LinearLR,
        start_factor=1e-5,
        by_epoch=True,
        begin=0,
        end=warmup_ratio * max_epochs,
        convert_to_iter_based=True),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        begin=warmup_ratio * max_epochs,
        end=max_epochs,
        convert_to_iter_based=True)
]
# train, val, test setting
train_cfg = dict(type=TrainLoop, max_epochs=max_epochs)
#######################################################################
# PART 5 Runtime #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        every_n_iters=evaluation_freq,
        evaluation_inputs=evaluation_inputs,
        system=SYSTEM,
        prompt_template=prompt_template)
]
if use_varlen_attn:
    custom_hooks += [dict(type=VarlenAttnArgsToMessageHubHook)]
# configure default hooks
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type=IterTimerHook),
    # print log every 10 iterations.
    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
    # enable the parameter scheduler.
    param_scheduler=dict(type=ParamSchedulerHook),
    # save checkpoint per `save_steps`.
    checkpoint=dict(
        type=CheckpointHook,
        by_epoch=False,
        interval=save_steps,
        max_keep_ckpts=save_total_limit),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type=DistSamplerSeedHook),
)
# configure environment
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)
# set visualizer
visualizer = None
# set log level
log_level = 'INFO'
# load from which checkpoint
load_from = None
# whether to resume training from the loaded checkpoint
resume = False
# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)
# set log processor
log_processor = dict(by_epoch=False)
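After training, the LoRA weights live in xtuner's `.pth` checkpoints under `work_dirs/`. A common follow-up, sketched below with placeholder paths, is to export the adapter to Hugging Face format with xtuner's converter and then merge it into the base model with `peft`, so the result can be served directly or uploaded with the ModelScope script earlier in this diff. (xtuner also provides an `xtuner convert merge` helper that combines both steps.)

```python
# Post-training sketch: export the adapter, then merge it into the base model.
# All paths are placeholders; adjust to your actual checkpoint names.
import subprocess

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1) Convert the xtuner .pth checkpoint into a Hugging Face LoRA adapter.
subprocess.run(
    ["xtuner", "convert", "pth_to_hf",
     "xtuner_config/Qwen2-7B-Instruct_lora.py",
     "work_dirs/Qwen2-7B-Instruct_lora/iter_xxx.pth",  # placeholder checkpoint
     "./hf_adapter"],
    check=True,
)

# 2) Merge the adapter into the base model and save the standalone result.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen2-7B-Instruct", torch_dtype=torch.float16, trust_remote_code=True)
merged = PeftModel.from_pretrained(base, "./hf_adapter").merge_and_unload()
merged.save_pretrained("./merged")
AutoTokenizer.from_pretrained(
    "Qwen2-7B-Instruct", trust_remote_code=True).save_pretrained("./merged")
```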