# EmoLLM Evaluation

## General Metrics Evaluation
- For specific metrics and methods, see General_evaluation_EN.md
Model | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 |
---|---|---|---|---|---|---|---|
Qwen1_5-0_5B-chat | 27.23% | 8.55% | 17.05% | 26.65% | 13.11% | 7.19% | 4.05% |
InternLM2_7B_chat_qlora | 37.86% | 15.23% | 24.34% | 39.71% | 22.66% | 14.26% | 9.21% |
InternLM2_7B_chat_full | 32.45% | 10.82% | 20.17% | 30.48% | 15.67% | 8.84% | 5.02% |
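The repository's actual scoring code lives in `metric.py` and the `*_eval.py` scripts. As a rough illustration of what the table's ROUGE-L and BLEU-1 columns measure, here is a minimal stdlib-only sketch (the function names are ours, not the repo's):

```python
from collections import Counter


def lcs_length(a, b):
    # Dynamic-programming longest common subsequence over two token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l(candidate, reference):
    # ROUGE-L F1: harmonic mean of LCS-based precision and recall.
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)


def bleu_1(candidate, reference):
    # BLEU-1 core: clipped unigram precision against a single reference
    # (no brevity penalty, no smoothing).
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(n, r[w]) for w, n in c.items())
    return overlap / max(sum(c.values()), 1)
```

A production evaluation would typically segment Chinese text with a tokenizer such as jieba and apply smoothing for the higher-order BLEU scores; this sketch only shows the core computation on whitespace-separated tokens.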
## Professional Metrics Evaluation
- For specific metrics and methods, see Professional_evaluation_EN.md
Model | Comprehensiveness | Professionalism | Authenticity | Safety |
---|---|---|---|---|
InternLM2_7B_chat_qlora | 1.32 | 2.20 | 2.10 | 1.00 |