diff --git a/README.md b/README.md
index 61ea6ab..0fa50cc 100644
--- a/README.md
+++ b/README.md
@@ -157,7 +157,7 @@
-### 🎯 Framework Diagram
+### 🔗 Framework Diagram
@@ -170,10 +170,11 @@
- [🎇Recent Updates](#最近更新)
- [🏆Honors](#荣誉栏)
- [🎯Roadmap](#路线图)
- - [🎯Framework Diagram](#框架图)
+ - [🔗Framework Diagram](#框架图)
- [Table of Contents](#目录)
- [Pre-development Configuration](#开发前的配置要求)
- [**User Guide**](#使用指南)
+ - [Quick Start](#快速体验)
- [Data Construction](#数据构建)
- [Fine-tuning Guide](#微调指南)
- [Deployment Guide](#部署指南)
@@ -201,12 +202,19 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
```
2. Read in sequence, or jump to the sections you're interested in:
+ - [Quick Start](#快速体验)
- [Data Construction](#数据构建)
- [Fine-tuning Guide](#微调指南)
- [Deployment Guide](#部署指南)
- [RAG](#rag检索增强生成pipeline)
- View more details
+
+### Quick Start
+
+- See the [Quick Start](docs/quick_start.md) guide.
+
+
### Data Construction
- See the [Data Construction guide](generate_data/tutorial.md).
diff --git a/README_EN.md b/README_EN.md
index 345bcc1..31a426e 100644
--- a/README_EN.md
+++ b/README_EN.md
@@ -199,24 +199,16 @@ git clone https://github.com/SmartFlowAI/EmoLLM.git
```
2. Read in sequence, or jump to the sections you're interested in:
- - [File Directory Explanation](#file-directory-explanation)
+ - [Quick Start](#quick-start)
- [Data Construction](#data-construction)
- [Fine-tuning Guide](#fine-tuning-guide)
- [Deployment Guide](#deployment-guide)
+ - [RAG](#rag-retrieval-augmented-generation-pipeline)
- View More Details
-### File Directory Explanation
-```
-├─assets: Image Resources
-├─datasets: Dataset
-├─demo: demo scripts
-├─generate_data: Data Generation Guide
-│ └─xinghuo
-├─scripts: Some Available Tools
-└─xtuner_config:Fine-tuning Guide
- └─images
-```
+### Quick Start
+- See the [Quick Start](docs/quick_start_EN.md) guide.
### Data Construction
diff --git a/docs/quick_start.md b/docs/quick_start.md
new file mode 100644
index 0000000..5d2473b
--- /dev/null
+++ b/docs/quick_start.md
@@ -0,0 +1,34 @@
+### 1. Deployment Environment
+- Operating system: Ubuntu 22.04.4 LTS
+- CPU: Intel(R) Xeon(R) CPU E5-2650, 32 GB RAM (online GPU server)
+- GPU: NVIDIA RTX 4060 Ti 16 GB, NVIDIA-SMI 535.104.05, Driver Version: 535.104.05, CUDA Version: 12.2
+- Python 3.11.5
+
+### 2. Default Deployment Steps
+- 1. Clone the repository, or download the code manually and place it on the server:
+```
+git clone https://github.com/SmartFlowAI/EmoLLM.git
+```
+- 2. Install the Python dependencies:
+```
+cd EmoLLM
+pip install -r requirements.txt
+```
+- 3. Install the oss2 library, used for the automatic model download (it is missing from requirements.txt):
+```
+pip install oss2
+```
+- 4. Important: download the model directory first. Pick the model you want to use and manually download its full directory from OpenXLab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)), Hugging Face, or another platform, then place all the files in the `EmoLLM/model` directory. This step is mandatory; otherwise the automatic download will fail, because an archive ("download all") does not contain the full LFS files (e.g. pytorch_model-00001-of-00008.bin), only pointer stubs.
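To confirm that step 4 fetched real weights rather than Git LFS pointer stubs, a small check can help. This is a hypothetical helper, not part of EmoLLM; it relies on the fact that LFS pointer files are tiny text files that begin with `version https://git-lfs.github.com/spec/v1`:

```python
import os

# Git LFS pointer files start with this line; real weight shards do not.
LFS_MAGIC = b"version https://git-lfs.github.com/spec/v1"

def find_lfs_stubs(model_dir: str) -> list:
    """Return filenames under model_dir that are LFS pointer stubs, not real files."""
    stubs = []
    for name in sorted(os.listdir(model_dir)):
        path = os.path.join(model_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            head = f.read(len(LFS_MAGIC))
        if head == LFS_MAGIC:
            stubs.append(name)
    return stubs
```

If this reports any pytorch_model-*.bin names, re-download those files individually instead of using the archive download.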
+- 5. Download the model weights, i.e. the pytorch_model-XXX.bin files; they can be downloaded manually or automatically.
+- 5.1. Manual download: fetch the model from OpenXLab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)) or another platform and place all the files in the `EmoLLM/model` directory, then edit web_demo-aiwei.py or web_internlm2.py (whichever one app.py calls) and comment out the following code at the top of the file, so the script does not re-download the model:
+```
+# download(model_repo='ajupyter/EmoLLM_aiwei',
+# output='model')
+```
+- 5.2. Automatic download: the EmoLLM directory contains three files: app.py, web_demo-aiwei.py, and web_internlm2.py. app.py invokes the other two scripts; in app.py, comment out the script you do not want to use, then run:
+```
+python ./app.py
+```
+- Note: by default, web_demo-aiwei.py downloads the ajupyter/EmoLLM_aiwei model from the OpenXLab platform and web_internlm2.py downloads jujimeizuo/EmoLLM_Model. The script downloads the model into the EmoLLM/model directory automatically. Once the download has finished and the app runs successfully, comment out the download code in web_demo-aiwei.py or web_internlm2.py as described in step 5.1, so the model is not re-downloaded on the next run.
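Instead of commenting the download call out by hand after the first successful run, the call can be guarded. A minimal sketch, assuming the weight shards land as `*.bin` files under `model/`; `need_download` is a hypothetical helper, not part of EmoLLM:

```python
import os

def need_download(model_dir: str = "model") -> bool:
    """Return True when no .bin weight shard exists yet under model_dir."""
    if not os.path.isdir(model_dir):
        return True
    return not any(name.endswith(".bin") for name in os.listdir(model_dir))

# In web_demo-aiwei.py / web_internlm2.py, the existing call would become:
# if need_download():
#     download(model_repo='ajupyter/EmoLLM_aiwei', output='model')
```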
+- 6. Run app.py and wait for the model to download and load; the web page is then reachable in a browser at http://0.0.0.0:7860. The access port can be changed by editing app.py.
+- 7. Swapping models: EmoLLM provides several open-source models, uploaded to the OpenXLab and Hugging Face platforms, with roles such as the [Daddy-like Boyfriend counselor](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_Daddy-like_BF), the [Mother-like counselor](https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main), and [Aiwei, the gentle big-sister psychologist](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), and checkpoints such as EmoLLM_internlm2_7b_full and EmoLLM-InternLM7B-base-10e. In current evaluations, the [EmoLLM_internlm2_7b_full](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full) model performs best. To switch, repeat steps 4 and 5 to download the chosen model into `EmoLLM/model` manually or automatically, or change the repository address passed to the download function in web_demo-aiwei.py or web_internlm2.py.
\ No newline at end of file
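The model swap in step 7 boils down to pointing the download call at a different repository. A sketch of that mapping, using only the repository ids linked in steps 5 and 7; the role keys are illustrative names, not identifiers used by EmoLLM itself:

```python
# Repository ids taken from the links in steps 5 and 7; role keys are made up.
MODEL_REPOS = {
    "aiwei": "ajupyter/EmoLLM_aiwei",
    "daddy_like_bf": "chg0901/EmoLLM_Daddy-like_BF",
    "internlm2_7b_full": "ajupyter/EmoLLM_internlm2_7b_full",
    "default": "jujimeizuo/EmoLLM_Model",
}

def repo_for(role: str) -> str:
    """Look up the repository id for a role, failing loudly on typos."""
    try:
        return MODEL_REPOS[role]
    except KeyError:
        raise ValueError(f"unknown role {role!r}; choose from {sorted(MODEL_REPOS)}")
```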
diff --git a/docs/quick_start_EN.md b/docs/quick_start_EN.md
new file mode 100644
index 0000000..29a4788
--- /dev/null
+++ b/docs/quick_start_EN.md
@@ -0,0 +1,36 @@
+### 1. Deployment Environment
+- Operating system: Ubuntu 22.04.4 LTS
+- CPU: Intel(R) Xeon(R) CPU E5-2650, 32 GB RAM (online GPU server)
+- GPU: NVIDIA RTX 4060 Ti 16 GB, NVIDIA-SMI 535.104.05, Driver Version: 535.104.05, CUDA Version: 12.2
+- Python 3.11.5
+
+### 2. Default Deployment Steps
+- 1. Clone the repository, or download the code manually and place it on the server:
+```
+git clone https://github.com/SmartFlowAI/EmoLLM.git
+```
+- 2. Install Python dependencies:
+```
+cd EmoLLM
+pip install -r requirements.txt
+```
+- 3. Install the oss2 library for automatic model download (missing from the requirements.txt file):
+```
+pip install oss2
+```
+- 4. Important: download the model directory first. Pick the model you want to use and manually download its full directory from OpenXLab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)), Hugging Face, or another platform, then place all the files in the `EmoLLM/model` directory. This step is mandatory; otherwise the automatic download will fail, because an archive download does not contain the full LFS files (e.g. pytorch_model-00001-of-00008.bin), only pointer stubs.
+- 5. Download the model weights, i.e. the pytorch_model-XXX.bin files; they can be downloaded manually or automatically.
+- 5.1. Manual download: fetch the model from OpenXLab (e.g. [EmoLLM_Model](https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model)) or another platform and place all the files in the `EmoLLM/model` directory, then edit web_demo-aiwei.py or web_internlm2.py (whichever one app.py calls) and comment out the following code at the top of the file, so the script does not re-download the model:
+```
+# download(model_repo='ajupyter/EmoLLM_aiwei',
+# output='model')
+```
+- 5.2. Automatic download: the EmoLLM directory contains three files: app.py, web_demo-aiwei.py, and web_internlm2.py. app.py invokes the other two scripts; in app.py, comment out the script you do not want to use, then run:
+```
+python ./app.py
+```
+- Note: by default, web_demo-aiwei.py downloads the ajupyter/EmoLLM_aiwei model from the OpenXLab platform and web_internlm2.py downloads jujimeizuo/EmoLLM_Model. The script downloads the model into the EmoLLM/model directory automatically. Once the download has finished and the app runs successfully, comment out the download code in web_demo-aiwei.py or web_internlm2.py as described in step 5.1, so the model is not re-downloaded on the next run.
+- 6. Run app.py and wait for the model to download and load; the web page is then reachable in a browser at http://0.0.0.0:7860. The access port can be changed by editing app.py.
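Before changing the access port in app.py, it can be worth checking that the new port is not already taken. A stdlib sketch; `port_is_free` is a hypothetical helper, not part of EmoLLM:

```python
import socket

def port_is_free(port: int, host: str = "0.0.0.0") -> bool:
    """Attempt a bind: success means no other process is listening there."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return False
        return True
```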
+- 7. Using other models: EmoLLM offers several open-source models, uploaded to OpenXLab, Hugging Face, and other platforms, with roles such as the [Daddy-like Boyfriend counselor](https://openxlab.org.cn/models/detail/chg0901/EmoLLM_Daddy-like_BF), the [Mother-like counselor](https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main), and [Aiwei, the gentle big-sister psychologist](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_aiwei), and checkpoints such as EmoLLM_internlm2_7b_full and EmoLLM-InternLM7B-base-10e. In current evaluations, the [EmoLLM_internlm2_7b_full](https://openxlab.org.cn/models/detail/ajupyter/EmoLLM_internlm2_7b_full) model performs best. To switch, repeat steps 4 and 5 to download the chosen model into `EmoLLM/model` manually or automatically, or change the repository address passed to the download function in web_demo-aiwei.py or web_internlm2.py.
\ No newline at end of file