diff --git a/generate_data/tutorial_EN.md b/generate_data/tutorial_EN.md
index fbdb05e..2f26544 100644
--- a/generate_data/tutorial_EN.md
+++ b/generate_data/tutorial_EN.md
@@ -1,61 +1,65 @@
-# EMO Psychological large model fine-tuning data generation tutorial
+# EmoLLM fine-tuning data generation tutorial
 **I. Objectives and Background**
-  In order to have a better representation of our large mental models, we must have high quality data sets. To achieve this goal, we decided to use four powerful AI grand models: Wenxin Yiyi, Tongyi Qianwen, Feifei Spark, and NXP AI to generate conversation data. In addition, we will enhance the cognitive depth of the dataset and improve the generalization ability of the model by adding a small number of self-cognitive datasets.
+In order for our psychological large model to perform better, we need high-quality datasets. To achieve this goal, we decided to use four powerful large language models: **Wenxin Yiyan**, **Tongyi Qianwen**, **iFlytek Spark**, and **Zhipu GLM** to generate conversation data. In addition, we will enhance the cognitive depth of the dataset and improve the generalization ability of the model by adding a small self-cognition dataset.
-**II. Data set generation method**
+**II. Dataset generation method**
 1. **Model selection and data preparation**
-   Choose four big language models, namely Wenxin Yiyi, Tongyi Qianwen, IFei Spark and Zhipu, obtain the API to call the corresponding interface, and prepare to generate dialogue data.
-2. **Single round and multiple round dialogue data generation **
+   Choose four large language models, namely Wenxin Yiyan, Tongyi Qianwen, iFlytek Spark and Zhipu GLM, obtain API keys to call the corresponding interfaces, and prepare to generate dialogue data.
+
+2. **Single-turn and multi-turn dialogue data generation**
-   Using these four models, we generated 10,000 single - and multi-round conversation data. In doing so, we ensure the diversity, complexity and validity of our data.
+   Using these four models, we generated 10,000 single-turn and multi-turn conversation samples. In doing so, we ensured the diversity, complexity and validity of our data.
-   Because mental activity is often complex, in order to ensure the diversity of data. We selected a total of 16 * 28 `448` scenarios for data set generation. For specific scenario names, please refer to the configuration of the two parameters`emotions_list and areas_of_life`in config.yml.
-3. **Inclusion of self-perception datasets**
+   Because mental activity is often complex, and in order to ensure the diversity of the data, we selected a total of 16 * 28 = 448 scenarios for dataset generation. For the specific scenario names, please refer to the configuration of the two parameters `emotions_list` and `areas_of_life` in `config.yml`.
+
+3. **Inclusion of self-cognition datasets**
-   In order to enhance the cognitive ability of the model, we specially added a part of self-cognitive data set. These data sets help the model better understand the context and improve the naturalness and coherence of the conversation.
+   In order to enhance the model's self-cognition, we added a small self-cognition dataset. These data help the model better understand the context and improve the naturalness and coherence of the conversation.
 **III. Practical steps**
 1. **Initialize**
-* Install the required software and libraries.
+* Install the required software and libraries
 ```bash
 pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
 ```
-* Prepare input data and configuration parameters.
+
+* Prepare input data and configuration parameters
   See `config.yml` for annotations
 2. **Model selection and configuration**
-* Select the right model for your needs.
+* Select the right model for your needs
 In order to enable everyone to play with the large model, we chose the InterLLM2-7B as our baseline model (consumer graphics cards can also be deployed fine-tuned oh).
-* Make necessary configuration and adjustments to the model.
-  Use XTuner for fine-tuning based on our data set and configuration strategy
+
+* Make necessary configurations and adjustments to the model
+  Use XTuner for fine-tuning based on our dataset and configuration strategy.
 3. **Data generation**
-* Data generation using Tongyi Qianwen large model.
+* Data generation using Tongyi Qianwen
 ```bash
 # Terminal operation
 bash run_qwen.bash
 ```
-* Use Baidu Wenxin large model for data generation.
+* Data generation using Wenxin Yiyan
 ```bash
 # Terminal operation
 python ernie_gen_data.py
 ```
-* Data generation using the NXP AI large model.
+* Data generation using Zhipu GLM
 ```bash
 # Terminal operation
 python zhipuai_gen_data.py
 ```
-* Use IFlystar Fire model for data generation.
+* Data generation using iFlytek Spark
 ```bash
 # Terminal operation
 python ./xinghuo/gen_data.py
@@ -63,7 +67,7 @@
 4. **Integration of self-cognition datasets**
-* Self-cognition data set this needs to be manually generated in accordance with the format, the following format can be.
+* The self-cognition dataset needs to be generated manually; the following format can be used
 ```json
 [
     {
@@ -85,16 +89,18 @@
 ]
 ```
-5. **Data set integration.**
+5. **Dataset integration**
+
+Before dataset integration, we need to check whether the generated data has formatting errors, type mismatches, etc. We use `check.py` to check the data. Finally, `merge_json.py` is used to combine all the JSON files into one overall JSON file.
-   Before data set integration, we need to check whether the generated data has formatting errors, type mismatches, etc. We need check.py to check the data. Finally, merge_json.py is used to combine all the json into one overall json file.
 6. **Evaluation and optimization**
-* Evaluate the generated dataset using appropriate evaluation metrics.
-* Make necessary optimizations and adjustments based on the evaluation results.
+* Evaluate the generated dataset using appropriate evaluation metrics
+* Make necessary optimizations and adjustments based on the evaluation results
 7. **Testing and deployment**
-* Evaluate the trained model using an independent test set.
-* Make necessary adjustments and optimizations based on test results.
-* Deploy the final model into a real application.
+* Evaluate the trained model using an independent test set
+* Make necessary adjustments and optimizations based on test results
+* Deploy the final model into a real application
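As a supplement to the scenario-coverage point in the tutorial above (16 emotions x 28 life areas = 448 scenarios, configured via `emotions_list` and `areas_of_life` in `config.yml`), here is a minimal sketch of how those combinations could be enumerated. The two key names come from the tutorial; the assumption that they are top-level keys of `config.yml`, and the script itself, are illustrative only and are not the repository's actual generation code.

```python
# Illustrative sketch only: enumerate (emotion, life-area) scenario pairs
# from config.yml. Assumes `emotions_list` and `areas_of_life` are
# top-level keys holding lists of strings, as the tutorial suggests.
import itertools
import yaml  # pip install pyyaml

with open("config.yml", "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

emotions = config["emotions_list"]   # e.g. 16 emotion labels
areas = config["areas_of_life"]      # e.g. 28 life areas

# Every (emotion, area) pair is one generation scenario: 16 * 28 = 448.
scenarios = list(itertools.product(emotions, areas))
print(f"{len(emotions)} x {len(areas)} = {len(scenarios)} scenarios")

for emotion, area in scenarios:
    # Fill each pair into whatever prompt template the generation script uses,
    # then call the chosen model API (Qwen, ERNIE, Spark, or GLM) to produce
    # single-turn and multi-turn dialogues for that scenario.
    prompt = f"A counselling dialogue about {emotion} in the context of {area}."
    # ... call the model API with `prompt` here ...
```

Each (emotion, area) pair would then be substituted into the prompt template used by the generation scripts (`run_qwen.bash`, `ernie_gen_data.py`, `zhipuai_gen_data.py`, `./xinghuo/gen_data.py`) before calling the corresponding model API.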