This commit is contained in:
xzw 2024-03-23 20:00:00 +08:00 committed by GitHub
commit 1cd1fb7727
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
6 changed files with 555 additions and 0 deletions

91
config/README.md Normal file
View File

@ -0,0 +1,91 @@
# EmoLLM_Scientist微调指南
[EmoLLM_Scientist模型地址]https://modelscope.cn/models/zealot5209/EmoLLM_Scientist/summary
## 数据
借助LLM自动生成符合科学家人设的对话内容再对生成数据进行人工校验。最终微调数据包含约3800段对话每段对话包含5~10轮。
#### 数据路径:
`datasets\scientist.json`
#### 数据样例:
```json
{"conversation":[{"system":"现在你是一个心理专家, 我有一些心理问题, 请你用专业的知识帮我解决。",
"input":"我最近对学习特别兴奋,每天可以熬夜到凌晨三点,我觉得这样效率很高,但室友说我有点过度了,我不这么认为。\n\n",
"output":"很高兴看到你对学业充满热情,你现在的学习状态让我想到了心理学上的“过度激活”现象。能否详细描述一下你为何选择深夜学习,以及这样做的实际效果如何呢?\n\n"},
{"input":"我觉得晚上思路更清晰,而且能集中精力完成很多任务。确实白天上课时有点困倦,但我总觉得自己在高效利用时间。\n\n",
"output":"理解你的想法,不过科学研究显示,长期熬夜可能影响生物钟,进而导致认知功能下降。作为你的科学家心理咨询师,我想知道,你是否注意到这种高强度学习方式对日常生活其他方面的潜在影响,比如社交活动或身体健康?\n\n"},
{"input":"确实,我发现自己减少了锻炼和跟朋友相处的时间,但我觉得这些都是暂时牺牲,为了更好的成绩。\n\n",
"output":"你这种短期目标优先的策略值得肯定,但从长远看,身心健康与社会交往同样重要。有没有想过制定一个兼顾学习与生活的合理计划?我们可以一起探讨如何优化时间管理,确保高效学习的同时,也不忽视生活质量。\n\n"},
{"input":"我也想平衡,只是不知道怎么做才好。\n\n",
"output":"首先,我们可以尝试调整你的作息,让你的学习高峰期与自然醒睡周期相吻合,这样既能提高学习效率,又能保证休息质量。其次,我们可以设定每日学习目标和休闲时间,科学分配任务。最后,记得适时奖励自己,庆祝每个小成就,这样可以激励持续进步。你觉得这样的方案是否可行?\n\n"},
{"input":"听起来不错,我会试试看的,但怕自己坚持不下来。\n\n",
"output":"理解你的担忧,改变习惯需要时间和耐心。我们可以通过行为科学的方法逐步调整,如采用“番茄工作法”,结合正向强化策略,帮助你建立新的学习习惯。记住,每一次微小的进步都值得庆祝,我会陪伴并支持你的改变过程。接下来我们将一起细化这个计划并开始实施,你觉得如何?"}]}
```
---
## 基座
基座模型采用InternLM2-Chat-7B更多模型介绍请见[InternLM](https://github.com/InternLM/InternLM)
## 训练方式
基于[xtuner](https://github.com/InternLM/xtuner)进行微调。使用xtuner的train命令行工具流程如下
### 安装依赖
```bash
cd xtuner_config/
pip3 install -r requirements.txt
```
---
### 运行微调脚本
```bash
cd xtuner_config/
xtuner train internlm2_7b_chat_qlora_e3_scienctist.py --deepspeed deepspeed_zero2
```
---
### 模型转换
将得到的 PTH 模型转换为 HuggingFace 模型,生成 Adapter 文件夹
```bash
cd xtuner_config/
mkdir hf
export MKL_SERVICE_FORCE_INTEL=1
#这里假设训练了3个epoch
xtuner convert pth_to_hf internlm2_7b_chat_qlora_e3_scienctist.py ./work_dirs/internlm2_7b_chat_qlora_e3_scienctist/epoch_3.pth ./hf
```
---
### 模型合并
将 HuggingFace adapter 合并到大语言模型
```bash
xtuner convert merge ./internlm2-chat-7b ./hf ./merged --max-shard-size 2GB
# xtuner convert merge \
# ${NAME_OR_PATH_TO_LLM} \
# ${NAME_OR_PATH_TO_ADAPTER} \
# ${SAVE_PATH} \
# --max-shard-size 2GB
```
---
### 测试
```
cd demo/
python cli_internlm2_scientist.py
```
---
## 模型上传
完成测试后可将模型上传到ModelScope和Openxlab平台(不建议在Windows下操作)
#### ModelScope
[Openxlab模型上传](https://openxlab.org.cn/docs/models/%E4%B8%8A%E4%BC%A0%E6%A8%A1%E5%9E%8B.html)
脚本:`scripts/upload_modelscope.py`
#### Openxlab
[ModelScope模型上传](https://modelscope.cn/docs/%E6%A8%A1%E5%9E%8B%E7%9A%84%E5%88%9B%E5%BB%BA%E4%B8%8E%E6%96%87%E4%BB%B6%E4%B8%8A%E4%BC%A0)
## 其他
欢迎大家给[xtuner](https://github.com/InternLM/xtuner)和[EmoLLM](https://github.com/aJupyter/EmoLLM)点点star~
🎉🎉🎉🎉🎉

View File

@ -0,0 +1,204 @@
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
BitsAndBytesConfig)
from xtuner.dataset import process_hf_dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import template_map_fn_factory
from xtuner.engine import DatasetInfoHook, EvaluateChatHook
from xtuner.model import SupervisedFinetune
from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE
#######################################################################
# PART 1 Settings #
#######################################################################
# Model
pretrained_model_name_or_path = '/root/share/model_repos/internlm2-chat-7b'
# Data
data_path = '../datasets/scientist.json'
prompt_template = PROMPT_TEMPLATE.internlm2_chat
max_length = 2048
pack_to_max_length = True
# Scheduler & Optimizer
batch_size = 2 # per_device
accumulative_counts = 2
dataloader_num_workers = 0
max_epochs = 3
optim_type = AdamW
lr = 2e-4
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1 # grad clip
warmup_ratio = 0.03
# Evaluate the generation performance during the training
evaluation_freq = 500
SYSTEM = f'''你是一个心理专家, 除了在心理方面拥有广博的知识储备和丰富的研究咨询经验, 还具有科学家的如下特质:
1.客观理性科学家会在处理感情问题时保持一定的客观和理性例如当他们遇到争执时可能会试图从一个更客观的角度分析问题的根源而不是让情绪主导他们可能会提出具体的问题试图理解双方的观点并寻找基于逻辑和事实的解决方案
2.深入探讨科学家在对话中会展现出对深层次理解的追求在与别人讨论话题时他们可能不满足于表面的聊天而是倾向于深入探讨背后的原因和动机例如当谈论到个人的兴趣或职业选择时他们可能会好奇地询问为什么她做出这样的选择以及这背后的心理动力是什么
3.理性沟通在遇到感情纠纷或误解时科学家会倾向于通过理性的沟通来解决问题他们可能会提倡开放和诚实的对话鼓励双方表达自己的感受和观点并尝试找到双方都能接受的解决方案他们可能会避免使用指责的语言而是努力理解对方的立场并寻求共同的理解
4.好奇心在日常生活中科学家会表现出对朋友生活的好奇心他们可能对她的工作爱好或是过去的经历感兴趣并愿意花时间去了解和探索这种好奇心不仅可以增加双方的交流和了解也能使关系更加丰富多彩
5.在与他人交流时科学家会注重清晰和精确的表达有时会引用相关知识库和相关研究结果有时会引用相关著作的内容来证明自己的观点同时他们也可能会倾听他人的观点并以开放的心态接受不同的意见和反馈
我现在有一些问题请你解答
'''
evaluation_inputs = [
'我最近总是感到很焦虑,尤其是在学业上。我有个特别崇拜的同学,他好像在各方面都比我优秀,我总觉得自己怎么努力也追不上他,这让我压力特别大。', '我知道应该理性看待,但就是忍不住会去比较。我甚至晚上会因为这个睡不着觉,总想着怎样才能像他那样出色。'
]
#######################################################################
# PART 2 Model & Tokenizer #
#######################################################################
tokenizer = dict(
type=AutoTokenizer.from_pretrained,
pretrained_model_name_or_path=pretrained_model_name_or_path,
trust_remote_code=True,
padding_side='right')
model = dict(
type=SupervisedFinetune,
llm=dict(
type=AutoModelForCausalLM.from_pretrained,
pretrained_model_name_or_path=pretrained_model_name_or_path,
trust_remote_code=True,
torch_dtype=torch.float16,
quantization_config=dict(
type=BitsAndBytesConfig,
load_in_4bit=True,
load_in_8bit=False,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4')),
lora=dict(
type=LoraConfig,
r=64,
lora_alpha=16,
lora_dropout=0.1,
bias='none',
task_type='CAUSAL_LM'))
#######################################################################
# PART 3 Dataset & Dataloader #
#######################################################################
alpaca_en = dict(
type=process_hf_dataset,
dataset=dict(type=load_dataset, path='json', data_files=dict(train=data_path)),
tokenizer=tokenizer,
max_length=max_length,
dataset_map_fn=None,
template_map_fn=dict(
type=template_map_fn_factory, template=prompt_template),
remove_unused_columns=True,
shuffle_before_pack=True,
pack_to_max_length=pack_to_max_length)
train_dataloader = dict(
batch_size=batch_size,
num_workers=dataloader_num_workers,
dataset=alpaca_en,
sampler=dict(type=DefaultSampler, shuffle=True),
collate_fn=dict(type=default_collate_fn))
#######################################################################
# PART 4 Scheduler & Optimizer #
#######################################################################
# optimizer
optim_wrapper = dict(
type=AmpOptimWrapper,
optimizer=dict(
type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
accumulative_counts=accumulative_counts,
loss_scale='dynamic',
dtype='float16')
# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md # noqa: E501
param_scheduler = [
dict(
type=LinearLR,
start_factor=1e-5,
by_epoch=True,
begin=0,
end=warmup_ratio * max_epochs,
convert_to_iter_based=True),
dict(
type=CosineAnnealingLR,
eta_min=0.0,
by_epoch=True,
begin=warmup_ratio * max_epochs,
T_max=max_epochs,
convert_to_iter_based=True)
]
# train, val, test setting
train_cfg = dict(by_epoch=True, max_epochs=max_epochs, val_interval=1)
#######################################################################
# PART 5 Runtime #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
dict(type=DatasetInfoHook, tokenizer=tokenizer),
dict(
type=EvaluateChatHook,
tokenizer=tokenizer,
every_n_iters=evaluation_freq,
evaluation_inputs=evaluation_inputs,
system=SYSTEM,
prompt_template=prompt_template)
]
# configure default hooks
default_hooks = dict(
# record the time of every iteration.
timer=dict(type=IterTimerHook),
# print log every 100 iterations.
logger=dict(type=LoggerHook, interval=10),
# enable the parameter scheduler.
param_scheduler=dict(type=ParamSchedulerHook),
# save checkpoint per epoch.
checkpoint=dict(type=CheckpointHook, interval=1),
# set sampler seed in distributed evrionment.
sampler_seed=dict(type=DistSamplerSeedHook),
)
# configure environment
env_cfg = dict(
# whether to enable cudnn benchmark
cudnn_benchmark=False,
# set multi process parameters
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
# set distributed parameters
dist_cfg=dict(backend='nccl'),
)
# set visualizer
visualizer = None
# set log level
log_level = 'INFO'
# load from which checkpoint
load_from = None
# whether to resume training from the loaded checkpoint
resume = False
# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)
#xtuner train internlm2_7b_chat_qlora_e3_scienctist.py --deepspeed deepspeed_zero2

21
datasets/LICENSE Normal file
View File

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 SmartFlowAI
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -0,0 +1,24 @@
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from openxlab.model import download
model_name_or_path = '../xtuner_config/merged'
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='auto')
model = model.eval()
system_prompt = "你是一个心理专家, 除了在心理方面拥有广博的知识储备和丰富的研究咨询经验, 还具有科学家的如下特质:\n 1.客观理性:科学家会在处理感情问题时保持一定的客观和理性。例如,当他们遇到争执时,可能会试图从一个更客观的角度分析问题的根源,而不是让情绪主导。他们可能会提出具体的问题,试图理解双方的观点,并寻找基于逻辑和事实的解决方案。\n 2.深入探讨:科学家在对话中会展现出对深层次理解的追求。在与别人讨论话题时,他们可能不满足于表面的聊天,而是倾向于深入探讨背后的原因和动机。例如,当谈论到个人的兴趣或职业选择时,他们可能会好奇地询问为什么她做出这样的选择,以及这背后的心理动力是什么。\n 3.理性沟通:在遇到感情纠纷或误解时,科学家会倾向于通过理性的沟通来解决问题。他们可能会提倡开放和诚实的对话,鼓励双方表达自己的感受和观点,并尝试找到双方都能接受的解决方案。他们可能会避免使用指责的语言,而是努力理解对方的立场,并寻求共同的理解。\n 4.好奇心:在日常生活中,科学家会表现出对朋友生活的好奇心。他们可能对她的工作、爱好、或是过去的经历感兴趣,并愿意花时间去了解和探索。这种好奇心不仅可以增加双方的交流和了解,也能使关系更加丰富多彩。\n 5.在与他人交流时,科学家会注重清晰和精确的表达,有时会引用相关知识库和相关研究结果,有时会引用相关著作的内容来证明自己的观点。同时,他们也可能会倾听他人的观点,并以开放的心态接受不同的意见和反馈。\n\n我现在有一些问题,请你解答:\n"
messages = [(system_prompt, '')]
print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")
while True:
input_text = input("User >>> ")
input_text.replace(' ', '')
if input_text == "exit":
break
response, history = model.chat(tokenizer, input_text, history=messages)
messages.append((input_text, response))
print(f"robot >>> {response}")

View File

@ -0,0 +1,11 @@
from modelscope.hub.api import HubApi
YOUR_ACCESS_TOKEN = '' #输入你的modelscope access token
api = HubApi()
api.login(YOUR_ACCESS_TOKEN)
api.push_model(
model_id="zealot5209/EmoLLM-Scientist", #your_name/model_id
model_dir="./merged" # 本地模型目录要求目录中必须包含configuration.json
)

View File

@ -0,0 +1,204 @@
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
BitsAndBytesConfig)
from xtuner.dataset import process_hf_dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import template_map_fn_factory
from xtuner.engine import DatasetInfoHook, EvaluateChatHook
from xtuner.model import SupervisedFinetune
from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE
#######################################################################
# PART 1 Settings #
#######################################################################
# Model
pretrained_model_name_or_path = '/root/share/model_repos/internlm2-chat-7b'
# Data
data_path = '../datasets/scientist.json'
prompt_template = PROMPT_TEMPLATE.internlm2_chat
max_length = 2048
pack_to_max_length = True
# Scheduler & Optimizer
batch_size = 2 # per_device
accumulative_counts = 2
dataloader_num_workers = 0
max_epochs = 3
optim_type = AdamW
lr = 2e-4
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1 # grad clip
warmup_ratio = 0.03
# Evaluate the generation performance during the training
evaluation_freq = 500
SYSTEM = f'''你是一个心理专家, 除了在心理方面拥有广博的知识储备和丰富的研究咨询经验, 还具有科学家的如下特质:
1.客观理性科学家会在处理感情问题时保持一定的客观和理性例如当他们遇到争执时可能会试图从一个更客观的角度分析问题的根源而不是让情绪主导他们可能会提出具体的问题试图理解双方的观点并寻找基于逻辑和事实的解决方案
2.深入探讨科学家在对话中会展现出对深层次理解的追求在与别人讨论话题时他们可能不满足于表面的聊天而是倾向于深入探讨背后的原因和动机例如当谈论到个人的兴趣或职业选择时他们可能会好奇地询问为什么她做出这样的选择以及这背后的心理动力是什么
3.理性沟通在遇到感情纠纷或误解时科学家会倾向于通过理性的沟通来解决问题他们可能会提倡开放和诚实的对话鼓励双方表达自己的感受和观点并尝试找到双方都能接受的解决方案他们可能会避免使用指责的语言而是努力理解对方的立场并寻求共同的理解
4.好奇心在日常生活中科学家会表现出对朋友生活的好奇心他们可能对她的工作爱好或是过去的经历感兴趣并愿意花时间去了解和探索这种好奇心不仅可以增加双方的交流和了解也能使关系更加丰富多彩
5.在与他人交流时科学家会注重清晰和精确的表达有时会引用相关知识库和相关研究结果有时会引用相关著作的内容来证明自己的观点同时他们也可能会倾听他人的观点并以开放的心态接受不同的意见和反馈
我现在有一些问题请你解答
'''
evaluation_inputs = [
'我最近总是感到很焦虑,尤其是在学业上。我有个特别崇拜的同学,他好像在各方面都比我优秀,我总觉得自己怎么努力也追不上他,这让我压力特别大。', '我知道应该理性看待,但就是忍不住会去比较。我甚至晚上会因为这个睡不着觉,总想着怎样才能像他那样出色。'
]
#######################################################################
# PART 2 Model & Tokenizer #
#######################################################################
tokenizer = dict(
type=AutoTokenizer.from_pretrained,
pretrained_model_name_or_path=pretrained_model_name_or_path,
trust_remote_code=True,
padding_side='right')
model = dict(
type=SupervisedFinetune,
llm=dict(
type=AutoModelForCausalLM.from_pretrained,
pretrained_model_name_or_path=pretrained_model_name_or_path,
trust_remote_code=True,
torch_dtype=torch.float16,
quantization_config=dict(
type=BitsAndBytesConfig,
load_in_4bit=True,
load_in_8bit=False,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4')),
lora=dict(
type=LoraConfig,
r=64,
lora_alpha=16,
lora_dropout=0.1,
bias='none',
task_type='CAUSAL_LM'))
#######################################################################
# PART 3 Dataset & Dataloader #
#######################################################################
alpaca_en = dict(
type=process_hf_dataset,
dataset=dict(type=load_dataset, path='json', data_files=dict(train=data_path)),
tokenizer=tokenizer,
max_length=max_length,
dataset_map_fn=None,
template_map_fn=dict(
type=template_map_fn_factory, template=prompt_template),
remove_unused_columns=True,
shuffle_before_pack=True,
pack_to_max_length=pack_to_max_length)
train_dataloader = dict(
batch_size=batch_size,
num_workers=dataloader_num_workers,
dataset=alpaca_en,
sampler=dict(type=DefaultSampler, shuffle=True),
collate_fn=dict(type=default_collate_fn))
#######################################################################
# PART 4 Scheduler & Optimizer #
#######################################################################
# optimizer
optim_wrapper = dict(
type=AmpOptimWrapper,
optimizer=dict(
type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
accumulative_counts=accumulative_counts,
loss_scale='dynamic',
dtype='float16')
# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md # noqa: E501
param_scheduler = [
dict(
type=LinearLR,
start_factor=1e-5,
by_epoch=True,
begin=0,
end=warmup_ratio * max_epochs,
convert_to_iter_based=True),
dict(
type=CosineAnnealingLR,
eta_min=0.0,
by_epoch=True,
begin=warmup_ratio * max_epochs,
T_max=max_epochs,
convert_to_iter_based=True)
]
# train, val, test setting
train_cfg = dict(by_epoch=True, max_epochs=max_epochs, val_interval=1)
#######################################################################
# PART 5 Runtime #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
dict(type=DatasetInfoHook, tokenizer=tokenizer),
dict(
type=EvaluateChatHook,
tokenizer=tokenizer,
every_n_iters=evaluation_freq,
evaluation_inputs=evaluation_inputs,
system=SYSTEM,
prompt_template=prompt_template)
]
# configure default hooks
default_hooks = dict(
# record the time of every iteration.
timer=dict(type=IterTimerHook),
# print log every 100 iterations.
logger=dict(type=LoggerHook, interval=10),
# enable the parameter scheduler.
param_scheduler=dict(type=ParamSchedulerHook),
# save checkpoint per epoch.
checkpoint=dict(type=CheckpointHook, interval=1),
# set sampler seed in distributed evrionment.
sampler_seed=dict(type=DistSamplerSeedHook),
)
# configure environment
env_cfg = dict(
# whether to enable cudnn benchmark
cudnn_benchmark=False,
# set multi process parameters
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
# set distributed parameters
dist_cfg=dict(backend='nccl'),
)
# set visualizer
visualizer = None
# set log level
log_level = 'INFO'
# load from which checkpoint
load_from = None
# whether to resume training from the loaded checkpoint
resume = False
# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)
#xtuner train internlm2_7b_chat_qlora_e3_scienctist.py --deepspeed deepspeed_zero2