add download_model.py script, automatic download of model libraries

This commit is contained in:
HatBoy 2024-04-10 13:57:50 +08:00
parent 14b8b9cb15
commit 1c5d447fd0
10 changed files with 113 additions and 113 deletions

4
app.py

@@ -1,3 +1,3 @@
import os
# os.system('streamlit run web_internlm2.py --server.address=0.0.0.0 --server.port 7860')
os.system('streamlit run web_demo-aiwei.py --server.address=0.0.0.0 --server.port 7860')
os.system('streamlit run web_internlm2.py --server.address=0.0.0.0 --server.port 7860')
#os.system('streamlit run web_demo-aiwei.py --server.address=0.0.0.0 --server.port 7860')

BIN
assets/model.png Normal file

Binary file not shown.



@@ -1,44 +1,7 @@
# EmoLLM Deployment Guide
## Local Deployment
- Clone the repo
```bash
git clone https://github.com/aJupyter/EmoLLM.git
```
- Install dependencies
```bash
pip install -r requirements.txt
```
- Download the model
- Model weights: https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model
- Download via openxlab.model.download; see [cli_internlm2](./cli_internlm2.py) for details
```python
from openxlab.model import download
download(model_repo='jujimeizuo/EmoLLM_Model', output='model')
```
- You can also download the model manually, place it in the `./model` directory, and then delete the code above.
- cli_demo
```bash
python ./demo/cli_internlm2.py
```
- web_demo
```bash
python ./app.py
```
If deploying on a server, you need to configure local port mapping.
- See [Quick Start](../docs/quick_start.md) for details.
## Deploy on OpenXLab


@@ -1,44 +1,7 @@
# Deploying Guide for EmoLLM
## Local Deployment
- Clone repo
```bash
git clone https://github.com/aJupyter/EmoLLM.git
```
- Install dependencies
```bash
pip install -r requirements.txt
```
- Download the model
- Model weights: https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model
- Download via openxlab.model.download; see [cli_internlm2](./cli_internlm2.py) for details
```python
from openxlab.model import download
download(model_repo='jujimeizuo/EmoLLM_Model', output='model')
```
- You can also download manually and place it in the `./model` directory, then delete the above code.
- cli_demo
```bash
python ./demo/cli_internlm2.py
```
- web_demo
```bash
python ./app.py
```
If deploying on a server, you need to configure local port mapping.
- See [Quick Start](../docs/quick_start_EN.md) for details.
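After configuring port mapping (e.g. via SSH local forwarding), a quick way to confirm the Streamlit port is actually reachable is a small probe like the one below. This is an illustrative sketch, not part of the repo; the host and port are whatever you mapped (the guide uses 7860):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # Attempt a TCP connection; True means something is listening
    # (e.g. the streamlit server started by app.py), False otherwise.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("127.0.0.1", 7860)` should return True once the web demo is up.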
## Deploy on OpenXLab


@@ -14,21 +14,24 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
# cd EmoLLM
# pip install -r requirements.txt
```
- 3. Install the oss2 library for automatic model download (it is missing from the requirements.txt file)
- 3. Download the model files; they can be downloaded manually, or automatically by running the download_model.py script.
- 3.1. To download the model files automatically, run the script:
```
# pip install oss2
# python download_model.py <model_repo>
# The model repository for the web_demo-aiwei.py script is ajupyter/EmoLLM_aiwei:
# python download_model.py ajupyter/EmoLLM_aiwei
# The model repository for the web_internlm2.py script is jujimeizuo/EmoLLM_Model:
# python download_model.py jujimeizuo/EmoLLM_Model
# The script can also download other models automatically. It currently only supports automatic download from the openxlab platform; models from other platforms must be downloaded manually. After a successful download, a new model directory (the model file directory) appears under the EmoLLM directory.
```
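Under the hood, the automatic download in step 3.1 maps the repository name to an archive URL; a minimal sketch of that mapping, using the same URL format that download_model.py uses:

```python
def archive_url(model_repo: str) -> str:
    # openxlab serves a zip of a repo's main branch at this endpoint,
    # e.g. for model_repo="jujimeizuo/EmoLLM_Model"
    return f"https://code.openxlab.org.cn/api/v1/repos/{model_repo}/archive/main.zip"
```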
- 4. Important: download the model directory. Pick the model you want to use and download its directory manually from openxlab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)), Huggingface, or another platform, then place all the files in the `EmoLLM/model` directory. This step is mandatory; otherwise the automatic download will report an error, because a packaged download does not include the full LFS files (e.g. pytorch_model-00001-of-00008.bin), only references to them.
- 5. Download the model, specifically the pytorch_model-XXX.bin files in the model directory; they can be downloaded manually or automatically.
- 5.1. To download manually, go to openxlab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)) or another platform, put all the files in the `EmoLLM/model` directory, then edit web_demo-aiwei.py or web_internlm2.py (whichever one app.py calls) and comment out the following code at the beginning, to keep the script from re-downloading the model:
- 3.2. To download the model file directory manually, go to openxlab, Huggingface, or another platform, download the complete model directory, and put all the files in the `EmoLLM/model` directory. Note that a packaged download of the model file directory does not include the LFS files (e.g. pytorch_model-00001-of-00008.bin); each complete LFS file must be downloaded individually.
![model](../assets/model.png)
- 4. Run the script. app.py only calls the web_demo-aiwei.py or web_internlm2.py file; download the model files for whichever script you want to run, then comment out the other script in app.py and run:
```
# download(model_repo='ajupyter/EmoLLM_aiwei',
# output='model')
python app.py
```
- 5.2. For automatic model download: the EmoLLM directory contains three files, app.py, web_demo-aiwei.py, and web_internlm2.py. app.py calls the latter two scripts; comment out the script you do not want to call, then run:
```
# python ./app.py
```
- Note: by default, web_demo-aiwei.py automatically downloads the ajupyter/EmoLLM_aiwei model from the openxlab platform, and web_internlm2.py downloads the jujimeizuo/EmoLLM_Model model. The script downloads the model into the EmoLLM/model directory. After the download completes and the app runs successfully, follow the manual-download steps in 5.1 to comment out the download code in web_demo-aiwei.py or web_internlm2.py, so the model is not re-downloaded on the next run.
6. After running app.py, wait for the model to download and load, then open http://0.0.0.0:7860 in a browser to access the model's web page. The web page access port can be changed by editing app.py.
7. Switching models: EmoLLM provides several open-source models, uploaded to the openxlab and Huggingface platforms, with roles such as the [Daddy-like boyfriend counselor](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_Daddy-like_BF), the [Mother-like counselor](https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main), and the [gentle psychologist Aiwei](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), and multiple models to choose from, such as EmoLLM_internlm2_7b_full and EmoLLM-InternLM7B-base-10e. In current evaluations, the [EmoLLM_internlm2_7b_full](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full) model performs best. Repeat steps 4 and 5 to download the relevant model (manually or automatically) into the `EmoLLM/model` directory, or change the repository address in the download function of web_demo-aiwei.py or web_internlm2.py for automatic download.
5. After running app.py, open http://0.0.0.0:7860 in a browser to access the model's web page. The access port can be changed by editing app.py. If deploying on a server, you need to configure local port mapping.
6. Switching models: EmoLLM provides several open-source models, uploaded to the openxlab and Huggingface platforms, with roles such as the [Daddy-like boyfriend counselor](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_Daddy-like_BF), the [Mother-like counselor](https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main), and the [gentle psychologist Aiwei](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), and multiple models to choose from, such as EmoLLM_internlm2_7b_full and EmoLLM-InternLM7B-base-10e. Repeat steps 3 and 4 to download the relevant model (manually or automatically) into the `EmoLLM/model` directory, then run and try it out.


@@ -14,23 +14,24 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
# cd EmoLLM
# pip install -r requirements.txt
```
- 3. Install the oss2 library for automatic model download (missing from the requirements.txt file):
- 3. Download the model files, either manually or by running the download_model.py script.
- 3.1. To download the model files automatically, run the script:
```
# pip install oss2
# python download_model.py <model_repo>
# The model repository for the web_demo-aiwei.py script is ajupyter/EmoLLM_aiwei, i.e.:
# python download_model.py ajupyter/EmoLLM_aiwei
# The model repository for the web_internlm2.py script is jujimeizuo/EmoLLM_Model, i.e.:
# python download_model.py jujimeizuo/EmoLLM_Model
# The script can also download other models automatically. It currently only supports automatic download from the openxlab platform; models from other platforms must be downloaded manually. After a successful download, a new model directory (the model file directory) appears under the EmoLLM directory.
```
- 4. Important: download the model directory. Pick the model you want to use and download its directory manually from openxlab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)), Huggingface, or another platform, and place all the files in the `EmoLLM/model` directory. This step is mandatory; otherwise the automatic download will report an error, because a packaged download does not include the full LFS files (e.g. pytorch_model-00001-of-00008.bin), only references to them.
- 5. Download the model, specifically the pytorch_model-XXX.bin files in the model directory; they can be downloaded manually or automatically.
- 5.1. To download manually, get the model from openxlab or another platform, put all the files in the `EmoLLM/model` directory, then edit web_demo-aiwei.py or web_internlm2.py (whichever one app.py calls) and comment out the following code at the beginning, to keep the script from re-downloading the model:
- 3.2. To download the model file directory manually, go to openxlab, Huggingface, or another platform, download the complete model directory, and put all the files in the `EmoLLM/model` directory. Note that a packaged download of the model file directory does not include the LFS files (e.g. pytorch_model-00001-of-00008.bin); each complete LFS file must be downloaded individually.
![model](../assets/model.png)
- 4. Run the script. app.py only calls the web_demo-aiwei.py or web_internlm2.py file; download the model files for whichever script you want to run, then comment out the other script in app.py and run:
```
# download(model_repo='ajupyter/EmoLLM_aiwei',
# output='model')
python app.py
```
- 5.2. For automatic model download: the EmoLLM directory contains three files, app.py, web_demo-aiwei.py, and web_internlm2.py; app.py calls the latter two scripts. Comment out the script you do not want to call, then run:
```
# python ./app.py
```
- Note: by default, web_demo-aiwei.py automatically downloads the ajupyter/EmoLLM_aiwei model from the openxlab platform, and web_internlm2.py downloads the jujimeizuo/EmoLLM_Model model. The script downloads the model into the EmoLLM/model directory. After the download completes and the app runs successfully, follow the manual-download steps in 5.1 to comment out the download code in web_demo-aiwei.py or web_internlm2.py, so the model is not re-downloaded on the next run.
6. Run app.py and wait for the model to download and load, then open http://0.0.0.0:7860 in your browser to access the model's web page. You can edit app.py to change the web page access port.
7. Using other models: EmoLLM offers several open-source models, uploaded to openxlab, Huggingface, and other platforms, with roles such as the [Daddy-like boyfriend counselor](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_Daddy-like_BF), the [Mother-like counselor](https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main), and the [gentle psychologist Aiwei](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), and several models to choose from, such as EmoLLM_internlm2_7b_full and EmoLLM-InternLM7B-base-10e. In current evaluations, the [EmoLLM_internlm2_7b_full](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full) model performs best. Repeat steps 4 and 5 to download the model (manually or automatically) into the `EmoLLM/model` directory, or modify the download function in web_demo-aiwei.py or web_internlm2.py to download it automatically.
- 5. After running app.py, open http://0.0.0.0:7860 in your browser to access the model's web page. You can edit app.py to change the access port. If deploying on a server, you need to configure local port mapping.
- 6. Using other models: EmoLLM offers several open-source models, uploaded to openxlab, Huggingface, and other platforms, with roles such as the [Daddy-like boyfriend counselor](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_Daddy-like_BF), the [Mother-like counselor](https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main), and the [gentle psychologist Aiwei](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), and several models to choose from, such as EmoLLM_internlm2_7b_full and EmoLLM-InternLM7B-base-10e. Repeat steps 3 and 4 to download the model (manually or automatically) into the `EmoLLM/model` directory, then run and try it out.

63
download_model.py Normal file

@@ -0,0 +1,63 @@
import requests
import os
import sys
import shutil
import zipfile
from openxlab.model import download

"""
Automatic download of model files from openxlab.
Currently only openxlab automatic download is supported; model files from other platforms need to be downloaded manually.
"""

if len(sys.argv) == 2:
    model_repo = sys.argv[1]
else:
    print("Usage: python download_model.py <model_repo>")
    print("Example: python download_model.py jujimeizuo/EmoLLM_Model")
    exit()

dir_name = "model"
if os.path.isdir(dir_name):
    print("model dir already exists")
    exit(0)

download_url = "https://code.openxlab.org.cn/api/v1/repos/{}/archive/main.zip".format(model_repo)
output_filename = "model_main.zip"

# download the model archive
response = requests.get(download_url, stream=True)
if response.status_code == 200:
    with open(output_filename, "wb") as f:
        for chunk in response.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
    print("Successfully downloaded model file")
else:
    print(f"Failed to download the model file. HTTP status code: {response.status_code}")
    exit()

if not os.path.isfile(output_filename):
    raise FileNotFoundError(f"ZIP file '{output_filename}' not found in the current directory.")

# extract the archive into a temporary directory and move its top-level folder to ./model
temp_dir = f".{os.sep}temp_{os.path.splitext(os.path.basename(output_filename))[0]}"
os.makedirs(temp_dir, exist_ok=True)
with zipfile.ZipFile(output_filename, 'r') as zip_ref:
    zip_ref.extractall(temp_dir)
top_level_dir = next(os.walk(temp_dir))[1][0]
source_dir = os.path.join(temp_dir, top_level_dir)
destination_dir = os.path.join(os.getcwd(), dir_name)
shutil.move(source_dir, destination_dir)
os.rmdir(temp_dir)
os.remove(output_filename)

# fetch the full LFS weight files for the requested repo via the openxlab SDK
# (was hardcoded to 'jujimeizuo/EmoLLM_Model', which ignored the <model_repo> argument)
download(model_repo=model_repo, output='model')
print("Model bin file download complete")


@@ -7,4 +7,6 @@ accelerate==0.24.1
transformers_stream_generator==0.0.4
openxlab
tiktoken
einops
einops
oss2
requests


@@ -9,6 +9,7 @@ Please run with the command `streamlit run path/to/web_demo.py --server.address=
Using `python path/to/web_demo.py` may cause unknown problems.
"""
import copy
import os
import warnings
from dataclasses import asdict, dataclass
from typing import Callable, List, Optional
@@ -24,8 +25,10 @@ from openxlab.model import download
logger = logging.get_logger(__name__)
download(model_repo='ajupyter/EmoLLM_aiwei',
output='model')
if not os.path.isdir("model"):
    print("[ERROR] model dir not found")
    exit(0)
@dataclass
class GenerationConfig:


@@ -9,6 +9,7 @@ Please run with the command `streamlit run path/to/web_demo.py --server.address=
Using `python path/to/web_demo.py` may cause unknown problems.
"""
import copy
import os
import warnings
from dataclasses import asdict, dataclass
from typing import Callable, List, Optional
@@ -24,8 +25,9 @@ from openxlab.model import download
logger = logging.get_logger(__name__)
download(model_repo='jujimeizuo/EmoLLM_Model',
output='model')
if not os.path.isdir("model"):
    print("[ERROR] model dir not found")
    exit(0)
@dataclass
class GenerationConfig: