OliveSensorAPI/xtuner_config
Anooyman de0674ccf7
Update main code (#2)
* update rag/src/data_processing.py

* Add files via upload

allow user to load embedding & rerank models from cache

* Add files via upload

embedding_path = os.path.join(model_dir, 'embedding_model')  
rerank_path = os.path.join(model_dir, 'rerank_model')

* Test push to dev

* Add files via upload

After merging, cleaning, and deduplicating the two Mother multi-turn dialogue datasets, we obtained 2,439 multi-turn dialogue samples (each with 6-8 turns).

* optimize deduplicate.py

Add timing information to the printed output
Save the duplicate dataset as well
Remove print(content)

* add base model qlora fine-tuning config file: internlm2_7b_base_qlora_e10_M_1e4_32_64.py

* add full finetune code from internlm2

* other 2 configs for base model

* update cli_internlm2.py

Three methods to load the model:

1. download the model from OpenXLab
2. download the model from ModelScope
3. load an offline model
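
A sketch of how those three loading paths might be dispatched; the function name, repo IDs, and the exact OpenXLab/ModelScope calls are assumptions, and cli_internlm2.py may differ:

```python
import os

def resolve_model_path(source: str, local_dir: str = "./internlm2-chat-7b") -> str:
    """Return a local path to the model weights for the chosen source.
    'openxlab' and 'modelscope' download on first use; 'offline' expects
    weights already on disk. Repo IDs below are illustrative."""
    if source == "openxlab":
        from openxlab.model import download  # pip install openxlab
        download(model_repo="OpenLMLab/internlm2-chat-7b", output=local_dir)
        return local_dir
    if source == "modelscope":
        from modelscope import snapshot_download  # pip install modelscope
        return snapshot_download("Shanghai_AI_Laboratory/internlm2-chat-7b")
    if source == "offline":
        if not os.path.isdir(local_dir):
            raise FileNotFoundError(f"expected local weights at {local_dir}")
        return local_dir
    raise ValueError(f"unknown source: {source}")
```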

* create upload_modelscope.py

* add base model and update personal contributions

* add README.md for Emollm_Scientist

* Create README_internlm2_7b_base_qlora.md

InternLM2 7B Base QLoRA fine-tuning guide

* [DOC] EmoLLM_Scientist fine-tuning guide

* update

* [DOC]README_scientist.md

* delete config

* format update

* upload xlab

* add README_Model_Uploading.md and images

* modelscope model upload

* Modify Recent Updates

* update daddy-like Boy-Friend EmoLLM

* update model uploading with openxlab

* update model uploading with openxlab

---------

Co-authored-by: zealot52099 <songyan5209@163.com>
Co-authored-by: xzw <62385492+aJupyter@users.noreply.github.com>
Co-authored-by: zealot52099 <67356208+zealot52099@users.noreply.github.com>
Co-authored-by: Bryce Wang <90940753+brycewang2018@users.noreply.github.com>
Co-authored-by: HongCheng <kwchenghong@gmail.com>
2024-03-24 11:51:19 +08:00
images solve conflict in readme files 2024-03-03 19:56:51 +09:00
airen-internlm2_chat_7b_qlora.py Add files via upload 2024-02-29 11:58:55 +08:00
aiwei-internlm2_chat_7b_qlora.py feat: Update Aiwei configuration. 2024-02-23 20:11:17 +08:00
baichuan2_13b_chat_qlora_alpaca_e3.py feat:Add new finetune configurations and datasets 2024-02-23 11:36:58 +08:00
chatglm3_6b_lora_alpaca_e3.py feat:Add new finetune configurations and datasets 2024-02-23 11:36:58 +08:00
ChatGLM3-6b-ft_EN.md Update ChatGLM3-6b-ft_EN.md 2024-03-16 20:21:20 +09:00
ChatGLM3-6b-ft.md GLM-6B ft 2024-03-19 18:03:26 +08:00
deepseek_moe_16b_chat_qlora_oasst1_e3.py feat:Add new finetune configurations and datasets 2024-02-23 11:36:58 +08:00
internlm2_1_8b_full_alpaca_e3.py feat:Add new finetune configurations and datasets 2024-02-23 11:36:58 +08:00
internlm2_7b_base_qlora_e3_M_1e4_32_64.py Update main code (#2) 2024-03-24 11:51:19 +08:00
internlm2_7b_base_qlora_e3.py add internlm2_7b_base_qlora_e3.py and modify requirements.txt 2024-03-21 15:55:50 +09:00
internlm2_7b_base_qlora_e10_b8_16_32.py Update main code (#2) 2024-03-24 11:51:19 +08:00
internlm2_7b_base_qlora_e10_M_1e4_32_64.py Update main code (#2) 2024-03-24 11:51:19 +08:00
internlm2_7b_chat_qlora_e3_scienctist.py Update main code (#2) 2024-03-24 11:51:19 +08:00
internlm2_7b_chat_qlora_e3.py feat: add web_internlm2 and upload s.t. scripts 2024-01-25 19:02:24 +08:00
internlm2_chat_7b_full_finetune_custom_dataset_e1.py Update main code (#2) 2024-03-24 11:51:19 +08:00
internlm2_chat_7b_full.py feat: add internlm2-chat-7b-config 2024-03-03 21:08:52 +08:00
mixtral_8x7b_instruct_qlora_oasst1_e3.py feat:Add new finetune configurations and datasets 2024-02-23 11:36:58 +08:00
qwen1_5_0_5_B_full.py feat:Add new finetune configurations and datasets 2024-02-23 11:36:58 +08:00
qwen_7b_chat_qlora_e3.py feat: add web_internlm2 and upload s.t. scripts 2024-01-25 19:02:24 +08:00
README_EN.md README files translation 2024-03-03 19:24:55 +09:00
README_internlm2_7b_base_qlora.md Update main code (#2) 2024-03-24 11:51:19 +08:00
README_scientist.md Update main code (#2) 2024-03-24 11:51:19 +08:00
README.md docs:add finetune doc and update readme 2024-01-26 22:25:42 +08:00
requirements.txt add internlm2_7b_base_qlora_e3.py and modify requirements.txt 2024-03-21 15:55:50 +09:00
upload_modelscope.py Update main code (#2) 2024-03-24 11:51:19 +08:00

Fine-Tuning Guide

  • This project has been fine-tuned not only on mental health datasets but also on a self-awareness dataset; the detailed fine-tuning guide follows.

I. Fine-Tuning Based on Xtuner 🎉🎉🎉🎉🎉

Environment Setup

datasets==2.16.1
deepspeed==0.13.1
einops==0.7.0
flash_attn==2.5.0
mmengine==0.10.2
openxlab==0.0.34
peft==0.7.1
sentencepiece==0.1.99
torch==2.1.2
transformers==4.36.2
xtuner==0.1.11

You can also install them all at once with:

cd xtuner_config/
pip3 install -r requirements.txt

Fine-Tuning

cd xtuner_config/
xtuner train internlm2_7b_chat_qlora_e3.py --deepspeed deepspeed_zero2

Convert the Obtained PTH Model to a HuggingFace Model

That is, generate the Adapter folder:

cd xtuner_config/
mkdir hf
export MKL_SERVICE_FORCE_INTEL=1

xtuner convert pth_to_hf internlm2_7b_chat_qlora_e3.py ./work_dirs/internlm_chat_7b_qlora_oasst1_e3_copy/epoch_3.pth ./hf

Merge the HuggingFace Adapter with the Large Language Model

xtuner convert merge ./internlm2-chat-7b ./hf ./merged --max-shard-size 2GB
# xtuner convert merge \
#     ${NAME_OR_PATH_TO_LLM} \
#     ${NAME_OR_PATH_TO_ADAPTER} \
#     ${SAVE_PATH} \
#     --max-shard-size 2GB

Testing

cd demo/
python cli_internlm2.py
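
A minimal sketch of what a CLI test script like cli_internlm2.py might do, assuming the merged weights from the previous step live in ./merged; `model.chat` is the chat helper InternLM2's remote code provides:

```python
def load_merged(path="./merged"):
    """Load the merged model produced by `xtuner convert merge`.
    trust_remote_code is required because InternLM2 ships custom model code."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        path, trust_remote_code=True, device_map="auto"
    ).eval()
    return model, tokenizer

def chat_once(model, tokenizer, query, history=None):
    """One chat turn; InternLM2's remote code exposes model.chat()."""
    response, history = model.chat(tokenizer, query, history=history or [])
    return response, history

if __name__ == "__main__":
    model, tokenizer = load_merged()
    while True:
        query = input("user> ")
        if query.strip() in ("exit", "quit"):
            break
        response, _ = chat_once(model, tokenizer, query)
        print("assistant>", response)
```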

II. Fine-Tuning Based on Transformers 🎉🎉🎉🎉🎉
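
This section is empty in the source; as a rough sketch under stated assumptions (plain transformers + peft QLoRA rather than the project's actual script; the chat template and the r=32/alpha=64/lr=1e-4 values are inferred from the config file names and may differ):

```python
def format_example(system: str, user: str, assistant: str) -> str:
    """Render one sample in InternLM2's ChatML-style template
    (<|im_start|>/<|im_end|> markers; exact template is an assumption)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{assistant}<|im_end|>\n"
    )

def main(base="./internlm2-chat-7b", data=None):
    # Heavy imports stay local so format_example is importable without a GPU.
    import torch
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig, DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        base,
        trust_remote_code=True,
        quantization_config=BitsAndBytesConfig(   # 4-bit NF4: the "Q" in QLoRA
            load_in_4bit=True,
            bnb_4it_quant_type="nf4" if False else "nf4",
            bnb_4bit_compute_dtype=torch.float16,
        ),
    )
    model = prepare_model_for_kbit_training(model)
    model = get_peft_model(model, LoraConfig(     # r/alpha guessed from the
        r=32, lora_alpha=64, lora_dropout=0.1,    # *_32_64 config file names
        task_type="CAUSAL_LM",
    ))

    texts = [format_example(s, u, a) for s, u, a in (data or [])]
    ds = Dataset.from_dict({"text": texts}).map(
        lambda row: tokenizer(row["text"], truncation=True, max_length=2048),
        remove_columns=["text"],
    )
    Trainer(
        model=model,
        args=TrainingArguments(output_dir="./work_dirs/hf_qlora",
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=16,
                               num_train_epochs=3,
                               learning_rate=1e-4,
                               fp16=True),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()
```

After training, the LoRA adapter can be merged and tested the same way as in the xtuner flow above.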


Other

Feel free to give xtuner and EmoLLM a star~

🎉🎉🎉🎉🎉