
Fine-Tuning Guide

  • This project was fine-tuned not only on mental health datasets but also on a self-awareness dataset; the detailed fine-tuning guide follows.

I. Fine-Tuning Based on Xtuner 🎉🎉🎉🎉🎉

Environment Setup

datasets==2.16.1
deepspeed==0.13.1
einops==0.7.0
flash_attn==2.5.0
mmengine==0.10.2
openxlab==0.0.34
peft==0.7.1
sentencepiece==0.1.99
torch==2.1.2
transformers==4.36.2
xtuner==0.1.11

You can also install them all at once with:

cd xtuner_config/
pip3 install -r requirements.txt
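
A quick way to sanity-check the environment before training (a minimal sketch; any import failure here means the pinned packages did not install cleanly):

# Sanity-check the pinned environment
import datasets, peft, torch, transformers
import flash_attn  # compiled extension; the usual failure point

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers", transformers.__version__)
print("peft", peft.__version__, "| datasets", datasets.__version__)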

Fine-Tuning

cd xtuner_config/
xtuner train internlm2_7b_chat_qlora_e3.py --deepspeed deepspeed_zero2
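
The --deepspeed deepspeed_zero2 flag enables DeepSpeed ZeRO stage 2, which shards optimizer states and gradients across GPUs to cut per-GPU memory. For multi-GPU runs, xtuner also honors the NPROC_PER_NODE environment variable (the GPU count below is an illustrative assumption):

# NPROC_PER_NODE=2 xtuner train internlm2_7b_chat_qlora_e3.py --deepspeed deepspeed_zero2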

Convert the Obtained PTH Model to a HuggingFace Model

That is, generate the LoRA adapter folder:

cd xtuner_config/
mkdir hf
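# MKL_SERVICE_FORCE_INTEL=1 works around an mkl-service / libgomp threading-layer conflict that can otherwise abort the conversion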
export MKL_SERVICE_FORCE_INTEL=1

xtuner convert pth_to_hf internlm2_7b_chat_qlora_e3.py ./work_dirs/internlm_chat_7b_qlora_oasst1_e3_copy/epoch_3.pth ./hf
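# The general form (placeholder names, mirroring the merge template below):
# xtuner convert pth_to_hf \
#     ${CONFIG_FILE} \
#     ${PTH_PATH} \
#     ${SAVE_PATH}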

Merge the HuggingFace Adapter with the Large Language Model

xtuner convert merge ./internlm2-chat-7b ./hf ./merged --max-shard-size 2GB
# xtuner convert merge \
#     ${NAME_OR_PATH_TO_LLM} \
#     ${NAME_OR_PATH_TO_ADAPTER} \
#     ${SAVE_PATH} \
#     --max-shard-size 2GB
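
After this step, ./merged holds a complete, standalone HuggingFace model; it can be loaded with from_pretrained like any other checkpoint, with no further dependency on the adapter in ./hf.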

Testing

cd demo/
python cli_internlm2.py
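
As an alternative to the demo script, here is a minimal sketch of querying the merged weights directly with transformers (the chat helper comes from InternLM2's remote modeling code; ./merged is the folder produced by the merge step):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged"  # output of the merge step above
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()

# chat() is provided by InternLM2's remote modeling code
response, history = model.chat(tokenizer, "Hello, how are you feeling today?", history=[])
print(response)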

II. Fine-Tuning Based on Transformers 🎉🎉🎉🎉🎉
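
A minimal sketch of what QLoRA fine-tuning with the plain transformers Trainer plus peft could look like (requires the bitsandbytes package); the dataset path, its "text" field, and the hyperparameters are illustrative assumptions, not the project's actual settings:

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "internlm/internlm2-chat-7b"

# 4-bit NF4 quantization, the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, trust_remote_code=True)
model = prepare_model_for_kbit_training(model)

# LoRA adapters; the module names below are InternLM2's attention/MLP projections
lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM",
    target_modules=["wqkv", "wo", "w1", "w2", "w3"],
)
model = get_peft_model(model, lora_config)

# data.json and its "text" field are placeholders for the project's dataset
dataset = load_dataset("json", data_files="data.json", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./work_dirs/transformers_qlora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("./work_dirs/transformers_qlora")  # saves the LoRA adapter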


Other

Feel free to give xtuner and EmoLLM a star~

🎉🎉🎉🎉🎉