[Doc] Update readme (#171)

xzw 2024-04-09 12:53:49 +08:00 committed by GitHub
commit 5abda388b3
4 changed files with 94 additions and 22 deletions

README.md

@@ -157,7 +157,7 @@
<img src="assets/Roadmap_ZH.png" alt="Roadmap_ZH">
</a>
### 🎯Framework Diagram
### 🔗Framework Diagram
<p align="center">
<a href="https://github.com/SmartFlowAI/EmoLLM/">
@@ -170,10 +170,11 @@
- [🎇Recent Updates](#最近更新)
- [🏆Honors](#荣誉栏)
- [🎯Roadmap](#路线图)
- [🎯Framework Diagram](#框架图)
- [🔗Framework Diagram](#框架图)
- [Contents](#目录)
- [Pre-development Configuration Requirements](#开发前的配置要求)
- [**User Guide**](#使用指南)
- [Quick Start](#快速体验)
- [Data Construction](#数据构建)
- [Fine-tuning Guide](#微调指南)
- [Deployment Guide](#部署指南)
@@ -201,28 +202,35 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
```
2. Read in sequence, or pick the sections you are interested in:
- [Quick Start](#快速体验)
- [Data Construction](#数据构建)
- [Fine-tuning Guide](#微调指南)
- [Deployment Guide](#部署指南)
- [RAG](#rag检索增强生成pipeline)
- View more details
### Data Construction
### 🍪Quick Start
- Please read [Quick Start](docs/quick_start.md) for details
### 📌Data Construction
- Please read the [Data Construction Guide](generate_data/tutorial.md) for details
- The dataset used for fine-tuning is available at [datasets](datasets/data.json)
### Fine-tuning Guide
### 🎨Fine-tuning Guide
See the [Fine-tuning Guide](xtuner_config/README.md) for details
### Deployment Guide
### 🔧Deployment Guide
- Demo deployment: see the [Deployment Guide](demo/README.md)
- Quantized deployment based on [LMDeploy](https://github.com/InternLM/lmdeploy/): see [deploy](./deploy/lmdeploy.md)
### RAG (Retrieval Augmented Generation) Pipeline
- See [RAG](./rag/) for details
@@ -304,6 +312,7 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
- [Vanin (assistant)](https://github.com/vansin)
- [Bloom up (WeChat Official Account promotion)](https://mp.weixin.qq.com/s/78lrRl2tlXEKUfElnkVx4A)
- Abu (M.A. in Psychology, Peking University)
- [HatBoy](https://github.com/hatboy)
<!-- links -->

README_EN.md

@@ -199,41 +199,33 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
```
1. Read in sequence, or read the sections you're interested in:
- [File Directory Explanation](#file-directory-explanation)
- [Quick Start](#quick-start)
- [Data Construction](#data-construction)
- [Fine-tuning Guide](#fine-tuning-guide)
- [Deployment Guide](#deployment-guide)
- [RAG](#rag-retrieval-augmented-generation-pipeline)
- View More Details
### File Directory Explanation
```
├─assets: Image Resources
├─datasets: Dataset
├─demo: demo scripts
├─generate_data: Data Generation Guide
│ └─xinghuo
├─scripts: Some Available Tools
└─xtuner_config: Fine-tuning Guide
    └─images
```
### 🍪Quick Start
- Please read the [Quick Start](docs/quick_start_EN.md) guide for details.
### Data Construction
### 📌Data Construction
- Please read the [Data Construction Guide](generate_data/tutorial_EN.md) for reference.
- The dataset used for this fine-tuning can be found at [datasets](datasets/data.json)
### Fine-tuning Guide
### 🎨Fine-tuning Guide
For details, see the [fine-tuning guide](xtuner_config/README_EN.md)
### Deployment Guide
### 🔧Deployment Guide
- Demo deployment: see [deployment guide](./demo/README_EN.md) for details.
- Quantized deployment based on [LMDeploy](https://github.com/InternLM/lmdeploy/): see [deploy](./deploy/lmdeploy_EN.md)
### RAG (Retrieval Augmented Generation) Pipeline
- See [RAG](./rag/)
@@ -307,6 +299,7 @@ The project is licensed under the MIT License. Please refer to the details
- [Vanin](https://github.com/vansin)
- [Bloom up (WeChat Official Account Promotion)](https://mp.weixin.qq.com/s/78lrRl2tlXEKUfElnkVx4A)
- Abu (M.A. in Psychology, Peking University)
- [HatBoy](https://github.com/hatboy)
<!-- links -->

docs/quick_start.md (new file, 34 lines)

@@ -0,0 +1,34 @@
### 1. Deployment Environment
- Operating system: Ubuntu 22.04.4 LTS
- CPU: Intel(R) Xeon(R) CPU E5-2650, 32 GB RAM (online GPU server)
- GPU: NVIDIA RTX 4060 Ti 16G (NVIDIA-SMI 535.104.05, Driver Version 535.104.05, CUDA Version 12.2)
- Python 3.11.5
### 2. Default Deployment Steps
- 1. Clone the code, or download it manually and place it on the server:
```
git clone https://github.com/SmartFlowAI/EmoLLM.git
```
- 2. Install the Python dependencies:
```
# cd EmoLLM
# pip install -r requirements.txt
```
- 3. Install the oss2 library, used for automatic model download (it is missing from requirements.txt):
```
# pip install oss2
```
- 4. Important: download the model directory. Choose the model you want to use and manually download its directory from OpenXLab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)) or Huggingface or another platform, then place all of the files under `EmoLLM/model`. This step is mandatory; otherwise the automatic download will fail, because a packaged download does not include the complete LFS files (e.g. pytorch_model-00001-of-00008.bin), only pointer references to them. A quick way to check this is sketched below.
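A minimal sketch of such a check, assuming the weights sit in `EmoLLM/model` as described above (the 1 MB threshold is only an illustrative assumption):
```
# Hypothetical sanity check: complete weight shards are gigabytes in size,
# while Git LFS pointer stubs are only a few hundred bytes.
import os

model_dir = "model"  # assumed path, relative to the EmoLLM repo root
for name in sorted(os.listdir(model_dir)):
    if name.endswith(".bin"):
        size_mb = os.path.getsize(os.path.join(model_dir, name)) / 1024 ** 2
        status = "OK" if size_mb > 1 else "LFS pointer only, re-download"
        print(f"{name}: {size_mb:.1f} MB ({status})")
```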
- 5. Download the model weights, specifically the pytorch_model-XXX.bin files in the model directory; they can be downloaded manually or automatically.
- 5.1. Manual download: fetch the model from OpenXLab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)) or another platform, place all files under `EmoLLM/model`, then edit web_demo-aiwei.py or web_internlm2.py (whichever one app.py calls) and comment out the following code at the top, so the script does not re-download the model (a fuller sketch of this call follows the snippet):
```
# download(model_repo='ajupyter/EmoLLM_aiwei',
# output='model')
```
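For reference, a sketch of what the auto-download call typically looks like when left active, assuming the OpenXLab SDK; the exact code at the top of web_demo-aiwei.py / web_internlm2.py may differ:
```
# Illustrative only: the download helper is assumed to come from the OpenXLab SDK.
from openxlab.model import download

# Leave this active for automatic download; comment it out once the files
# are already in EmoLLM/model, otherwise the model is fetched on every start.
download(model_repo='ajupyter/EmoLLM_aiwei', output='model')
```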
- 5.2. Automatic download: the EmoLLM directory contains three files, app.py, web_demo-aiwei.py and web_internlm2.py; app.py calls the latter two. In app.py, comment out the script you do not want to use, then run:
```
# python ./app.py
```
- Note: by default web_demo-aiwei.py automatically downloads the ajupyter/EmoLLM_aiwei model from the OpenXLab platform, and web_internlm2.py downloads the jujimeizuo/EmoLLM_Model model. The script downloads the model into the EmoLLM/model directory; once the download has completed and the demo runs successfully, comment out the download code in web_demo-aiwei.py or web_internlm2.py as in step 5.1 to avoid re-downloading on the next run.
6. After running app.py, wait for the model to download and load, then open http://0.0.0.0:7860 in a browser to reach the web page. The access port can be changed by editing app.py (a hypothetical sketch follows).
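A hypothetical sketch of that edit, assuming app.py launches the Streamlit demo through a shell command; the actual contents of app.py may differ:
```
# Hypothetical launcher: change the --server.port value to serve the
# web page on a port other than the default 7860.
import os

os.system("streamlit run web_internlm2.py "
          "--server.address=0.0.0.0 --server.port 7860")
```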
7. Replacing the model: EmoLLM provides several open-source models, uploaded to OpenXLab, Huggingface and other platforms, with roles such as the [Daddy-like boyfriend counselor](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_Daddy-like_BF), the [Mother-like counselor](https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main) and the [gentle elder-sister psychologist Aiwei](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), and multiple models to choose from, such as EmoLLM_internlm2_7b_full and EmoLLM-InternLM7B-base-10e. In current evaluations the [EmoLLM_internlm2_7b_full](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full) model performs well. Repeat steps 4 and 5 to download the chosen model (manually or automatically) into `EmoLLM/model`, or change the repository address in the download call of web_demo-aiwei.py or web_internlm2.py so it is fetched automatically (see the sketch below).
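A sketch of that repository swap, assuming the OpenXLab download call shown in step 5.1; the repo ID here is taken from the EmoLLM_internlm2_7b_full link above:
```
# Point the auto-download at a different published model, then restart app.py.
from openxlab.model import download

download(model_repo='ajupyter/EmoLLM_internlm2_7b_full', output='model')
```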

docs/quick_start_EN.md (new file, 36 lines)

@@ -0,0 +1,36 @@
### 1. Deployment Environment
- Operating system: Ubuntu 22.04.4 LTS
- CPU: Intel(R) Xeon(R) CPU E5-2650, 32 GB RAM (online GPU server)
- GPU: NVIDIA RTX 4060 Ti 16G (NVIDIA-SMI 535.104.05, Driver Version 535.104.05, CUDA Version 12.2)
- Python 3.11.5
### 2. Default Deployment Steps
- 1. Clone the code or manually download the code and place it on the server:
```
git clone https://github.com/SmartFlowAI/EmoLLM.git
```
- 2. Install Python dependencies:
```
# cd EmoLLM
# pip install -r requirements.txt
```
- 3. Install the oss2 library for automatic model download (missing from the requirements.txt file):
```
# pip install oss2
```
- 4. Important: download the model directory. Choose the model you want to use and manually download its directory from OpenXLab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)), Huggingface or another platform, then place all of the files in the `EmoLLM/model` directory. This step is mandatory; otherwise the automatic download will report an error, because a packaged download does not include the complete LFS files (e.g. pytorch_model-00001-of-00008.bin), only pointer references to them. A quick check is sketched below.
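A minimal sketch of such a check, assuming the weights are placed in `EmoLLM/model` as described above (the 1 MB threshold is only an illustrative assumption):
```
# Hypothetical check: complete weight shards are gigabytes in size, whereas
# Git LFS pointer stubs are only a few hundred bytes.
import os

model_dir = "model"  # assumed path, relative to the EmoLLM repo root
for name in sorted(os.listdir(model_dir)):
    if name.endswith(".bin"):
        size_mb = os.path.getsize(os.path.join(model_dir, name)) / 1024 ** 2
        print(name, f"{size_mb:.1f} MB", "OK" if size_mb > 1 else "pointer only, re-download")
```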
- 5. Download the model weights, specifically the pytorch_model-XXX.bin files in the model directory; they can be downloaded manually or automatically.
- 5.1. Manual download: fetch the model from OpenXLab or another platform, put all of the files in the `EmoLLM/model` directory, then edit web_demo-aiwei.py or web_internlm2.py (whichever one app.py calls) and comment out the following code at the top, so the script does not automatically re-download the model (a fuller sketch of this call follows the snippet):
```
# download(model_repo='ajupyter/EmoLLM_aiwei',
# output='model')
```
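For reference, a sketch of the auto-download call when left active, assuming the OpenXLab SDK; the exact code at the top of web_demo-aiwei.py / web_internlm2.py may differ:
```
# Illustrative only: the download helper is assumed to come from the OpenXLab SDK.
from openxlab.model import download

# Comment this out once the files are already present in EmoLLM/model,
# otherwise the model is re-downloaded on every start.
download(model_repo='jujimeizuo/EmoLLM_Model', output='model')
```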
- 5.2. Automatic download: there are three files in the EmoLLM directory, app.py, web_demo-aiwei.py and web_internlm2.py; app.py calls the latter two. In app.py, comment out the script you do not want to use, then run:
```
# python ./app.py
```
- Note: by default, web_demo-aiwei.py automatically downloads the ajupyter/EmoLLM_aiwei model from the OpenXLab platform, and web_internlm2.py downloads the jujimeizuo/EmoLLM_Model model. The script downloads the model into the EmoLLM/model directory; after the download completes and the demo runs successfully, follow the manual-download instructions in 5.1 and comment out the download code in web_demo-aiwei.py or web_internlm2.py to prevent re-downloading on the next run.
6. Run app.py and wait for the model to download and load, then access the web page at http://0.0.0.0:7860 in your browser. You can edit app.py to change the port (a hypothetical sketch follows).
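A hypothetical sketch of that edit, assuming app.py launches the Streamlit demo via a shell command; the actual contents of app.py may differ:
```
# Hypothetical launcher: change the --server.port value to serve the demo
# on a port other than the default 7860.
import os

os.system("streamlit run web_internlm2.py "
          "--server.address=0.0.0.0 --server.port 7860")
```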
7. Using other models: EmoLLM offers several open-source models, uploaded to OpenXLab, Huggingface and other platforms. There are roles such as the [Daddy-like boyfriend counselor](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_Daddy-like_BF), the [Mother-like counselor](https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main) and the [gentle elder-sister psychologist Aiwei](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), and several models to choose from, such as EmoLLM_internlm2_7b_full and EmoLLM-InternLM7B-base-10e. In current evaluations the [EmoLLM_internlm2_7b_full](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full) model performs well. Repeat steps 4 and 5 to place the chosen model in the `EmoLLM/model` directory (manually or automatically), or change the repository address in the download function of web_demo-aiwei.py or web_internlm2.py so it is downloaded automatically (see the sketch below).
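A sketch of that repository swap, assuming the OpenXLab download call shown in step 5.1; the repo ID is taken from the EmoLLM_internlm2_7b_full link above:
```
# Point the auto-download at a different published model and restart app.py.
from openxlab.model import download

download(model_repo='ajupyter/EmoLLM_internlm2_7b_full', output='model')
```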