14890fad56
* feat: add agents/actions/write_markdown
* [ADD] add evaluation result of base model on 5/10 epochs
* Rename mother.json to mother_v1_2439.json
* Add files via upload
* [DOC] update README
* Update requirements.txt: update mpi4py installation
* Update README_EN.md: update English comma
* Update README.md: fine-tuning of the mother-role multi-turn dialogue model is complete; uploaded to Huggingface
* Fine-tuning script for the mother-role multi-turn dialogue model
* Update README.md: added author information for 王几行XING and 思在
* Update README_EN.md
* Update README.md
* Update README_EN.md
* Update README_EN.md
* Changes to be committed: modified: .gitignore, README.md, README_EN.md; new file: assets/EmoLLM_transparent.png; deleted: assets/Shusheng.jpg; new file: assets/Shusheng.png, assets/aiwei_demo1.gif, assets/aiwei_demo2.gif, assets/aiwei_demo3.gif, assets/aiwei_demo4.gif
* Update README.md: rectify aiwei_demo.gif
* Update README.md: rectify aiwei_demo style
* Changes to be committed: modified: README.md, README_EN.md
* Changes to be committed: modified: README.md, README_EN.md
* [Doc] update readme
* [Doc] update readme
* Update README.md
* Update README_EN.md
* Update README.md
* Update README_EN.md
* Delete datasets/mother_v1_2439.json
* Rename mother_v2_3838.json to mother_v2.json
* Delete datasets/mother_v2.json
* Add files via upload
* Update README.md
* Update README_EN.md
* [Doc] Update README_EN.md: minor fix
* InternLM2-Base-7B QLoRA fine-tuned model: updated links and evaluation results
* add download_model.py script, automatic download of model libraries
* Remove black borders from images, update author information: modified: README.md; new file: assets/aiwei_demo.gif; deleted: assets/aiwei_demo1.gif; modified: assets/aiwei_demo2.gif, assets/aiwei_demo3.gif, assets/aiwei_demo4.gif
* rectify aiwei_demo transparent
* transparent
* modify: aiwei_demo table ---> div
* modified: aiwei_demo
* modify: div ---> table
* modified: README.md
* modified: README_EN.md
* update model config file links
* Create internlm2_20b_chat_lora_alpaca_e3.py: config file for the 20b model
* update model config file links
* Revert "update model config file links"
---------
Co-authored-by: jujimeizuo <fengzetao.zed@foxmail.com>
Co-authored-by: xzw <62385492+aJupyter@users.noreply.github.com>
Co-authored-by: Zeyu Ba <72795264+ZeyuBa@users.noreply.github.com>
Co-authored-by: Bryce Wang <90940753+brycewang2018@users.noreply.github.com>
Co-authored-by: zealot52099 <songyan5209@163.com>
Co-authored-by: HongCheng <kwchenghong@gmail.com>
Co-authored-by: Yicong <yicooong@qq.com>
Co-authored-by: Yicooong <54353406+Yicooong@users.noreply.github.com>
Co-authored-by: aJupyter <ajupyter@163.com>
Co-authored-by: MING_X <119648793+MING-ZCH@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: HatBoy <null2none@163.com>
Co-authored-by: ZhouXinAo <142309012+zxazys@users.noreply.github.com>
63 lines
1.8 KiB
Python
import os
import shutil
import sys
import zipfile

import requests
from openxlab.model import download

"""
Automatically download model files from openxlab.

Currently only openxlab automatic downloads are supported; model files
from other platforms need to be downloaded manually.
"""

if len(sys.argv) == 2:
    model_repo = sys.argv[1]
else:
    print("Usage: python download_model.py <model_repo>")
    print("Example: python download_model.py jujimeizuo/EmoLLM_Model")
    sys.exit(1)

dir_name = "model"

if os.path.isdir(dir_name):
    print("model directory already exists")
    sys.exit(0)

download_url = "https://code.openxlab.org.cn/api/v1/repos/{}/archive/main.zip".format(model_repo)
output_filename = "model_main.zip"

# Download the repository archive, streaming it to disk in chunks.
response = requests.get(download_url, stream=True)
if response.status_code == 200:
    with open(output_filename, "wb") as f:
        for chunk in response.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive chunks
                f.write(chunk)
    print("Successfully downloaded model file")
else:
    print(f"Failed to download the model file. HTTP status code: {response.status_code}")
    sys.exit(1)

if not os.path.isfile(output_filename):
    raise FileNotFoundError(f"ZIP file '{output_filename}' not found in the current directory.")

# Extract the archive into a temporary directory.
temp_dir = f".{os.sep}temp_{os.path.splitext(os.path.basename(output_filename))[0]}"
os.makedirs(temp_dir, exist_ok=True)

with zipfile.ZipFile(output_filename, 'r') as zip_ref:
    zip_ref.extractall(temp_dir)

# The archive contains a single top-level directory; move it into place.
top_level_dir = next(os.walk(temp_dir))[1][0]
source_dir = os.path.join(temp_dir, top_level_dir)
destination_dir = os.path.join(os.getcwd(), dir_name)
shutil.move(source_dir, destination_dir)

# Clean up the (now empty) temporary directory and the downloaded archive.
os.rmdir(temp_dir)
os.remove(output_filename)

# Fetch the model weights via the openxlab SDK into the same directory.
download(model_repo=model_repo, output='model')

print("Model bin file download complete")
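The extract-and-move sequence in the middle of the script can be exercised in isolation. The sketch below (file and directory names are illustrative, not from the repository) builds a small archive with a single top-level directory, mimicking the layout the openxlab archive endpoint returns, then replays the same steps: extract to a temp dir, find the single top-level directory, and move it to the final model directory.

```python
import os
import shutil
import tempfile
import zipfile

# Build a throwaway zip with one top-level directory (hypothetical names),
# mimicking the archive layout the download endpoint is assumed to return.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "EmoLLM_Model-main")
os.makedirs(src)
with open(os.path.join(src, "weights.bin"), "w") as f:
    f.write("fake weights")

archive = os.path.join(workdir, "model_main.zip")
with zipfile.ZipFile(archive, "w") as zf:
    for root, _, files in os.walk(src):
        for name in files:
            path = os.path.join(root, name)
            zf.write(path, os.path.relpath(path, workdir))

# Replay the script's steps: extract into a temp dir, locate the single
# top-level directory, move it to the destination, remove the temp dir.
temp_dir = os.path.join(workdir, "temp_extract")
os.makedirs(temp_dir, exist_ok=True)
with zipfile.ZipFile(archive, "r") as zf:
    zf.extractall(temp_dir)

top_level_dir = next(os.walk(temp_dir))[1][0]
destination = os.path.join(workdir, "model")
shutil.move(os.path.join(temp_dir, top_level_dir), destination)
os.rmdir(temp_dir)

print(sorted(os.listdir(destination)))  # ['weights.bin']
```

The `next(os.walk(temp_dir))[1][0]` idiom assumes the archive always contains exactly one top-level directory; if the endpoint ever returns a flat archive, the move step would need adjusting.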