* feat: add agents/actions/write_markdown

* [ADD] add evaluation result of base model on 5/10 epochs

* Rename mother.json to mother_v1_2439.json

* Add files via upload

* [DOC] update README

* Update requirements.txt

update mpi4py installation

* Update README_EN.md

fix English comma usage

* Update README.md

Fine-tuning of the multi-turn dialogue model based on the mother role is complete; the model has been uploaded to Huggingface.

* Fine-tuning script for the mother-role multi-turn dialogue model

* Update README.md

Added author information for 王几行XING and 思在

* Update README_EN.md

* Update README.md

* Update README_EN.md

* Update README_EN.md

* Changes to be committed:
	modified:   .gitignore
	modified:   README.md
	modified:   README_EN.md
	new file:   assets/EmoLLM_transparent.png
	deleted:    assets/Shusheng.jpg
	new file:   assets/Shusheng.png
	new file:   assets/aiwei_demo1.gif
	new file:   assets/aiwei_demo2.gif
	new file:   assets/aiwei_demo3.gif
	new file:   assets/aiwei_demo4.gif

* Update README.md

rectify aiwei_demo.gif

* Update README.md

rectify aiwei_demo style

* Changes to be committed:
	modified:   README.md
	modified:   README_EN.md

* Changes to be committed:
	modified:   README.md
	modified:   README_EN.md

* [Doc] update readme

* [Doc] update readme

* Update README.md

* Update README_EN.md

* Update README.md

* Update README_EN.md

* Delete datasets/mother_v1_2439.json

* Rename mother_v2_3838.json to mother_v2.json

* Delete datasets/mother_v2.json

* Add files via upload

* Update README.md

* Update README_EN.md

* [Doc] Update README_EN.md

minor fix

* Update the link and evaluation results of the InternLM2-Base-7B QLoRA fine-tuned model

* Add download_model.py script for automatic model download
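The script itself is not reproduced in this log. Purely as a hedged sketch, a helper of this shape could pull a model repository by id, matching the python download_model.py <repo_id> invocations added to app.py further below; the Hugging Face Hub backend and the function names here are assumptions, not the repository's actual implementation.

# Hypothetical sketch, not the repository's actual download_model.py:
# download a model repo by id and print its local path.
import sys

from huggingface_hub import snapshot_download  # assumed backend


def download(repo_id: str, local_dir: str = "model") -> str:
    # Fetch every file of the repo into local_dir and return the local path.
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python download_model.py <repo_id>")
        sys.exit(1)
    print(download(sys.argv[1]))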

* Remove black borders from images and update author information
	modified:   README.md
	new file:   assets/aiwei_demo.gif
	deleted:    assets/aiwei_demo1.gif
	modified:   assets/aiwei_demo2.gif
	modified:   assets/aiwei_demo3.gif
	modified:   assets/aiwei_demo4.gif

* Fix aiwei_demo transparency

* transparent

* modify: aiwei_demo table--->div

* modified:   aiwei_demo

* modify: div ---> table

* modified:   README.md

* modified:   README_EN.md

* update model config file links

* Create internlm2_20b_chat_lora_alpaca_e3.py

Configuration file for the 20B model
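The config itself is not included in this log. As a loose illustration of the LoRA piece such an xtuner config wires up, the snippet below uses peft's LoraConfig as a stand-in; the rank, dropout, and target modules are placeholder assumptions, not the values in internlm2_20b_chat_lora_alpaca_e3.py.

# Illustrative LoRA adapter settings only; values are placeholders and do not
# come from internlm2_20b_chat_lora_alpaca_e3.py.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                           # adapter rank
    lora_alpha=16,                  # scaling factor
    lora_dropout=0.1,               # dropout on the adapter path
    target_modules=["wqkv", "wo"],  # attention projections to adapt (model-dependent)
    task_type="CAUSAL_LM",
)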

* Fix the bug that prevented deployment on the OpenXLab platform.

* update model config file links

update model config file links

* Revert "update model config file links"

* [redo] update model config file links 20b

---------

Co-authored-by: jujimeizuo <fengzetao.zed@foxmail.com>
Co-authored-by: xzw <62385492+aJupyter@users.noreply.github.com>
Co-authored-by: Zeyu Ba <72795264+ZeyuBa@users.noreply.github.com>
Co-authored-by: Bryce Wang <90940753+brycewang2018@users.noreply.github.com>
Co-authored-by: zealot52099 <songyan5209@163.com>
Co-authored-by: HongCheng <kwchenghong@gmail.com>
Co-authored-by: Yicong <yicooong@qq.com>
Co-authored-by: Yicooong <54353406+Yicooong@users.noreply.github.com>
Co-authored-by: aJupyter <ajupyter@163.com>
Co-authored-by: MING_X <119648793+MING-ZCH@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: HatBoy <null2none@163.com>
Co-authored-by: ZhouXinAo <142309012+zxazys@users.noreply.github.com>
Commit dbe92d31f6 (parent 7ec43acf6a) by Anooyman, 2024-04-14 12:40:39 +08:00, committed via GitHub.
3 changed files with 15 additions and 4 deletions

README.md

@@ -50,7 +50,7 @@
 | InternLM2_7B_chat | 全量微调 | [internlm2_chat_7b_full.py](./xtuner_config/internlm2_chat_7b_full.py) |
 | InternLM2_7B_base | QLORA | [internlm2_7b_base_qlora_e10_M_1e4_32_64.py](./xtuner_config/internlm2_7b_base_qlora_e10_M_1e4_32_64.py) |
 | InternLM2_1_8B_chat | 全量微调 | [internlm2_1_8b_full_alpaca_e3.py](./xtuner_config/internlm2_1_8b_full_alpaca_e3.py) |
-| InternLM2_20B_chat | LORA | |
+| InternLM2_20B_chat | LORA |[internlm2_20b_chat_lora_alpaca_e3.py](./xtuner_config/internlm2_20b_chat_lora_alpaca_e3.py)|
 | Qwen_7b_chat | QLORA | [qwen_7b_chat_qlora_e3.py](./xtuner_config/qwen_7b_chat_qlora_e3.py) |
 | Qwen1_5-0_5B-Chat | 全量微调 | [qwen1_5_0_5_B_full.py](./xtuner_config/qwen1_5_0_5_B_full.py) |
 | Baichuan2_13B_chat | QLORA | [baichuan2_13b_chat_qlora_alpaca_e3.py](./xtuner_config/baichuan2_13b_chat_qlora_alpaca_e3.py) |

README_EN.md

@@ -52,7 +52,7 @@
 | InternLM2_7B_chat | full fine-tuning | [internlm2_chat_7b_full.py](./xtuner_config/internlm2_chat_7b_full.py) |
 | InternLM2_7B_base | QLORA | [internlm2_7b_base_qlora_e10_M_1e4_32_64.py](./xtuner_config/internlm2_7b_base_qlora_e10_M_1e4_32_64.py) |
 | InternLM2_1_8B_chat | full fine-tuning | [internlm2_1_8b_full_alpaca_e3.py](./xtuner_config/internlm2_1_8b_full_alpaca_e3.py) |
-| InternLM2_20B_chat | LORA | |
+| InternLM2_20B_chat | LORA |[internlm2_20b_chat_lora_alpaca_e3.py](./xtuner_config/internlm2_20b_chat_lora_alpaca_e3.py)|
 | Qwen_7b_chat | QLORA | [qwen_7b_chat_qlora_e3.py](./xtuner_config/qwen_7b_chat_qlora_e3.py) |
 | Qwen1_5-0_5B-Chat | full fine-tuning | [qwen1_5_0_5_B_full.py](./xtuner_config/qwen1_5_0_5_B_full.py) |
 | Baichuan2_13B_chat | QLORA | [baichuan2_13b_chat_qlora_alpaca_e3.py](./xtuner_config/baichuan2_13b_chat_qlora_alpaca_e3.py) |

app.py (15 changed lines)

@@ -1,3 +1,14 @@
 import os
-os.system('streamlit run web_internlm2.py --server.address=0.0.0.0 --server.port 7860')
-#os.system('streamlit run web_demo-aiwei.py --server.address=0.0.0.0 --server.port 7860')
+#model = "EmoLLM_aiwei"
+model = "EmoLLM_Model"
+if model == "EmoLLM_aiwei":
+    os.system("python download_model.py ajupyter/EmoLLM_aiwei")
+    os.system('streamlit run web_demo-aiwei.py --server.address=0.0.0.0 --server.port 7860')
+elif model == "EmoLLM_Model":
+    os.system("python download_model.py jujimeizuo/EmoLLM_Model")
+    os.system('streamlit run web_internlm2.py --server.address=0.0.0.0 --server.port 7860')
+else:
+    print("Please select one model")