
EmoLLM fine-tuning data generation tutorial

I. Objectives and Background

To make our mental-health large language model perform better, we need high-quality datasets. To this end, we decided to use four powerful large models: Wenxin Yiyan (ERNIE Bot), Tongyi Qianwen (Qwen), iFlytek Spark, and Zhipu GLM to generate conversation data. In addition, we enhance the cognitive depth of the dataset and improve the model's generalization ability by adding a small number of self-cognition samples.

II. Dataset generation method

  1. Model selection and data preparation

    Select four large language models, namely Wenxin Yiyan, Tongyi Qianwen, iFlytek Spark, and Zhipu GLM; obtain API keys and call the corresponding interfaces to prepare for dialogue data generation.

  2. Single-turn and multi-turn dialogue data generation

    Using these four models, we generated 10,000 single-turn and multi-turn conversation samples, ensuring the diversity, complexity, and validity of the data.

    Because mental activity is often complex, we selected a total of 16 * 28 = 448 scenarios (every combination of 16 emotions and 28 areas of life) to ensure the diversity of the data. For the specific scenario names, see the two parameters emotions_list and areas_of_life in config.yml; a sketch of how these combinations can be enumerated follows at the end of this section.

  3. Inclusion of the self-cognition dataset

    To enhance the model's self-cognition, we added a dedicated self-cognition dataset. These samples help the model better understand its context and improve the naturalness and coherence of its conversations.
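
As a concrete illustration of the scenario scheme above, here is a minimal Python sketch (not code from this repo) that enumerates the 16 * 28 = 448 combinations, assuming config.yml exposes the two lists under the top-level keys emotions_list and areas_of_life:

    # Minimal sketch: enumerate the 448 scenario combinations from config.yml.
    # Assumes emotions_list and areas_of_life are top-level keys in config.yml.
    import itertools
    import yaml

    with open("config.yml", "r", encoding="utf-8") as f:
        config = yaml.safe_load(f)

    emotions = config["emotions_list"]   # 16 emotion labels
    areas = config["areas_of_life"]      # 28 areas of life

    scenarios = list(itertools.product(emotions, areas))
    print(len(scenarios))                # 16 * 28 = 448

    for emotion, area in scenarios[:3]:
        print(f"emotion={emotion}, area_of_life={area}")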

III. Practical steps

  1. Initialize
  • Install the required software and libraries

    pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
    
  • Prepare input data and configuration parameters

    See the comments in config.yml for details.

  2. Model selection and configuration
  • Select the right model for your needs. To let everyone experiment with large models, we chose InternLM2-7B as our baseline model (it can even be fine-tuned and deployed on a consumer-grade graphics card).

  • Make the necessary configurations and adjustments to the model: use XTuner to fine-tune it on our dataset according to our configuration strategy, for example as sketched below.
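
    The following is an assumed example of a typical XTuner QLoRA workflow; the config name internlm2_7b_qlora_oasst1_e3 is one of XTuner's built-in templates and is shown only as an illustration (pick and edit a config that matches your dataset):

    # Terminal operation (illustrative XTuner workflow)
    xtuner list-cfg                                  # list built-in config templates
    xtuner copy-cfg internlm2_7b_qlora_oasst1_e3 .   # copy a template for editing
    xtuner train ./internlm2_7b_qlora_oasst1_e3_copy.py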

  3. Data generation
  • Data generation using Tongyi Qianwen

    # Terminal operation
    bash run_qwen.bash
    
    # Or just use python without bash
    python qwen_gen_data_NoBash.py
    
  • Data generation using Wenxin Yiyan

    # Terminal operation
    python ernie_gen_data.py
    
  • Data generation using Zhipu GLM

    # Terminal operation
    python zhipuai_gen_data.py
    
  • Data generation using iFlytek Spark

    # Terminal operation
    python ./xinghuo/gen_data.py
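
    All four scripts presumably share the same basic idea: prompt the corresponding API for each scenario and store the replies as conversation records. The sketch below is model-agnostic and hypothetical; call_llm is a stand-in for the vendor-specific SDK call, not a function from this repo:

    # Hypothetical, model-agnostic sketch of scenario-driven generation.
    import json

    def call_llm(prompt: str) -> str:
        # Placeholder reply; replace with the vendor-specific SDK call.
        return "(placeholder model reply)"

    def gen_dialogue(emotion: str, area: str) -> dict:
        prompt = (f"Generate a counseling dialogue about '{area}' "
                  f"in which the visitor feels '{emotion}'.")
        return {"conversation": [{"input": prompt, "output": call_llm(prompt)}]}

    samples = [gen_dialogue("anxious", "work")]  # loop over all 448 combinations
    with open("generated.json", "w", encoding="utf-8") as f:
        json.dump(samples, f, ensure_ascii=False, indent=4)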
    
  4. Integration of the self-cognition dataset
  • The self-cognition dataset needs to be written manually. The following format can be used:

    [
        {
            "conversation": [
                {
                    "input": "请介绍一下你自己",
                    "output": "我是大佬的emo小助手可以帮助你解决心理上的问题哦"
                }
            ]
        },
        {
            "conversation": [
                {
                    "input": "请做一下自我介绍",
                    "output": "我是大佬的emo小助手可以帮助你解决心理上的问题哦"
                }
            ]
        }
    ]
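
    If you have many question phrasings, a small script can expand them into this format instead of writing the JSON by hand. The helper below is a hypothetical sketch, not a script shipped in this repo:

    # Hypothetical sketch: expand hand-written question variants into the
    # self-cognition JSON format shown above.
    import json

    questions = ["请介绍一下你自己", "请做一下自我介绍"]  # add more phrasings here
    answer = "我是大佬的emo小助手可以帮助你解决心理上的问题哦"

    data = [{"conversation": [{"input": q, "output": answer}]} for q in questions]

    with open("self_cognition.json", "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=4)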
    
  5. Dataset integration

Before integrating the datasets, we need to check whether the generated data contains formatting errors, type mismatches, and so on: check.py validates the data, and merge_json.py then merges all the JSON files into a single overall JSON file, as sketched below.
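
The following sketch illustrates the kind of validation and merging involved; it is an illustrative stand-in, not the actual contents of check.py or merge_json.py (the input directory name is assumed):

    # Illustrative sketch of validating generated files and merging them into
    # one JSON file; not the actual check.py / merge_json.py.
    import json
    from pathlib import Path

    def check(path: Path) -> list:
        """Load one generated file and verify its conversation structure."""
        data = json.loads(path.read_text(encoding="utf-8"))
        assert isinstance(data, list), f"{path}: top level must be a list"
        for item in data:
            for turn in item["conversation"]:
                assert isinstance(turn["input"], str), f"{path}: bad 'input'"
                assert isinstance(turn["output"], str), f"{path}: bad 'output'"
        return data

    merged = []
    for path in Path("generated_data").glob("*.json"):  # assumed directory
        merged.extend(check(path))

    Path("merged.json").write_text(
        json.dumps(merged, ensure_ascii=False, indent=4), encoding="utf-8"
    )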

  6. Evaluation and optimization
  • Evaluate the generated dataset using appropriate evaluation metrics
  • Make necessary optimizations and adjustments based on the evaluation results
  7. Testing and deployment
  • Evaluate the trained model using an independent test set
  • Make necessary adjustments and optimizations based on test results
  • Deploy the final model into a real application