import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from openxlab.model import download
from modelscope import snapshot_download

# Option 1: download the model from OpenXLab
# model_name_or_path = download(model_repo='ajupyter/EmoLLM_internlm2_7b_full',
#                               output='EmoLLM_internlm2_7b_full')

# Option 2: download the model from ModelScope
model_name_or_path = snapshot_download('chg0901/EmoLLM-InternLM7B-base')

# Option 3: load an offline, locally merged model
# model_name_or_path = "/root/StableCascade/emollm2/EmoLLM/xtuner_config/merged"
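# Keep exactly one of the three options above uncommented: each later
# assignment to model_name_or_path overrides the earlier ones, so any
# additional active download would be wasted work.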

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map='auto',
)
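# Note: device_map='auto' requires the `accelerate` package and spreads the
# weights across the available GPU(s)/CPU; torch.bfloat16 roughly halves
# memory use compared to float32.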
model = model.eval()

# System prompt (English gloss): "You are the mental-health assistant EmoLLM,
# built by the EmoLLM team. Through professional psychological counseling, you
# help visitors complete a psychological diagnosis. Make full use of
# professional psychological knowledge and counseling techniques to help
# visitors solve their problems step by step."
system_prompt = '你是心理健康助手EmoLLM,由EmoLLM团队打造。你旨在通过专业心理咨询,协助来访者完成心理诊断。请充分利用专业心理学知识与咨询技术,一步步帮助来访者解决心理问题。'
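
# `model.chat` takes its history as a list of (query, response) pairs, so the
# system prompt is injected as the first "turn" with an empty response.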
messages = [(system_prompt, '')]

print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")

while True:
    input_text = input("User >>> ")
    # str.replace returns a new string, so the result must be assigned back
    # for the space-stripping to take effect
    input_text = input_text.replace(' ', '')
    if input_text == "exit":
        break
    response, history = model.chat(tokenizer, input_text, history=messages)
    # keep our copy of the history in sync with the newly completed turn
    messages.append((input_text, response))
    print(f"robot >>> {response}")