English-version README files were created and translated for the following documents:

1. demo/README.md
2. evaluate/README.md
3. xtuner_config/README.md
4. xtuner_config/images/README.md
5. xtuner_config/ChatGLM3-6b-ft.md

The Chinese versions had some formatting and wording problems, which I also fixed. In addition, I renamed `evaluate/General evaluation.md` and `evaluate/Professional evaluation.md`, since they are referenced in `xtuner_config/README.md`.
# Deploying Guide for EmoLLM

## Local Deployment
- Clone the repo

```shell
git clone https://github.com/aJupyter/EmoLLM.git
```

- Install dependencies

```shell
pip install -r requirements.txt
```
- Download the model
  - Model weights: https://openxlab.org.cn/models/detail/jujimeizuo/EmoLLM_Model
  - Download via `openxlab.model.download` (see `cli_internlm2` for details):

    ```python
    from openxlab.model import download

    download(model_repo='jujimeizuo/EmoLLM_Model', output='model')
    ```

  - You can also download the weights manually, place them in the `./model` directory, and delete the code above.
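The two download paths above (API download vs. manual placement into `./model`) can be folded into one small helper. This is a minimal sketch, not part of the repo: `ensure_model` is a hypothetical helper name, and it assumes the `openxlab` package is installed only when a download is actually needed.

```python
import os

MODEL_DIR = "./model"


def ensure_model(model_dir: str = MODEL_DIR) -> str:
    """Return the model directory, downloading the weights if they are absent.

    If the directory already contains files (e.g. a manual download),
    the openxlab download step is skipped entirely.
    """
    if not os.path.isdir(model_dir) or not os.listdir(model_dir):
        # Deferred import: only required on the first run.
        from openxlab.model import download
        download(model_repo='jujimeizuo/EmoLLM_Model', output=model_dir)
    return model_dir
```

Calling `ensure_model()` before launching either demo makes the scripts work the same way whether or not the weights were placed manually.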
- cli_demo

```shell
python ./demo/cli_internlm2.py
```

- web_demo

```shell
python ./app.py
```

If deploying on a server, you need to configure local port mapping.
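Once the tunnel is up (for example `ssh -L 7860:localhost:7860 user@server`, assuming Gradio's default port 7860; adjust if `app.py` uses another port), you can sanity-check the forwarded port from your local machine. `port_open` is a hypothetical helper, not part of the repo:

```python
import socket


def port_open(host: str = "localhost", port: int = 7860, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns `True`, opening `http://localhost:7860` in a local browser should reach the web demo through the tunnel.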
## Deploy on OpenXLab
- Log in to OpenXLab and create a Gradio application
- Select configurations and create the project
- Wait for the build and startup
- Try your own project