EmoLLM - Large Language Model for Mental Health

简体中文 | English

Explore the documentation of this project » · EmoLLM 2.0 Demo · Report a Bug · Propose a New Feature
EmoLLM is a series of large language models designed to understand, support, and help users in mental health counseling. The models are instruction fine-tuned from a range of base LLMs. We would really appreciate it if you could give the project a star ⭐⭐. The open-sourced configurations are as follows:
Model | Type | Link |
---|---|---|
InternLM2_7B_chat | QLORA | internlm2_7b_chat_qlora_e3.py |
InternLM2_7B_chat | full fine-tuning | internlm2_chat_7b_full.py |
InternLM2_7B_base | QLORA | internlm2_7b_base_qlora_e10_M_1e4_32_64.py |
InternLM2_1_8B_chat | full fine-tuning | internlm2_1_8b_full_alpaca_e3.py |
InternLM2_20B_chat | LORA | |
Qwen_7b_chat | QLORA | qwen_7b_chat_qlora_e3.py |
Qwen1_5-0_5B-Chat | full fine-tuning | qwen1_5_0_5_B_full.py |
Baichuan2_13B_chat | QLORA | baichuan2_13b_chat_qlora_alpaca_e3.py |
ChatGLM3_6B | LORA | chatglm3_6b_lora_alpaca_e3.py |
DeepSeek MoE_16B_chat | QLORA | deepseek_moe_16b_chat_qlora_oasst1_e3.py |
Mixtral 8x7B_instruct | QLORA | mixtral_8x7b_instruct_qlora_oasst1_e3.py |
…… | …… | …… |
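As a rough illustration, a config from the table above can typically be launched for training with XTuner as sketched below. This is a minimal sketch, not the project's official command: it assumes XTuner (and optionally DeepSpeed) is installed and that the config file lives under the repo's xtuner_config directory.

```bash
# Sketch: QLoRA fine-tuning of InternLM2_7B_chat using one of the configs listed above.
# Assumes xtuner is installed and the config is located in xtuner_config/.
pip install -U xtuner deepspeed

# Launch training; the DeepSpeed ZeRO-2 flag is optional and matches the hardware note
# in the pre-development configuration requirements.
xtuner train xtuner_config/internlm2_7b_chat_qlora_e3.py --deepspeed deepspeed_zero2
```

After training, XTuner's `xtuner convert pth_to_hf` step is typically used to export the resulting weights/adapter to HuggingFace format; see the fine-tuning guide for the project's actual workflow.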
Everyone is welcome to contribute to this project ~
The model aims to fully understand and promote the mental health of individuals, groups, and society. A model of this kind typically covers the following key components:
- Cognitive factors: Involving an individual's thought patterns, belief systems, cognitive biases, and problem-solving abilities. Cognitive factors significantly impact mental health as they affect how individuals interpret and respond to life events.
- Emotional factors: Including emotion regulation, emotional expression, and emotional experiences. Emotional health is a crucial part of mental health, involving how individuals manage and express their emotions and how they recover from negative emotions.
- Behavioral factors: Concerning an individual's behavior patterns, habits, and coping strategies. This includes stress management skills, social skills, and self-efficacy, which is the confidence in one's abilities.
- Social environment: Comprising external factors such as family, work, community, and cultural background, which have direct and indirect impacts on an individual's mental health.
- Physical health: There is a close relationship between physical and mental health. Good physical health can promote mental health and vice versa.
- Psychological resilience: Refers to an individual's ability to recover from adversity and adapt. Those with strong psychological resilience can bounce back from challenges and learn and grow from them.
- Prevention and intervention measures: The model also includes strategies for preventing psychological issues and promoting mental health, such as psychological education, counseling, therapy, and social support systems.
- Assessment and diagnostic tools: Effective promotion of mental health requires scientific tools to assess individuals' psychological states and diagnose potential psychological issues.
Recent Updates
- 【2024.3.25】 The [Mother-like Therapist] model has been released on Hugging Face (https://huggingface.co/brycewang2018/EmoLLM-mother/tree/main)
- 【2024.3.25】 The [Daddy-like Boy-Friend] model has been released on the Baidu PaddlePaddle AI Studio platform (https://aistudio.baidu.com/community/app/68787)
- 【2024.3.24】 The InternLM2-Base-7B QLoRA fine-tuned model has been released on the OpenXLab and ModelScope platforms. For more details, please refer to InternLM2-Base-7B QLoRA.
- 【2024.3.12】 [aiwei] has been released on the Baidu PaddlePaddle AI Studio platform (https://aistudio.baidu.com/community/app/63335)
- 【2024.3.11】 EmoLLM V2.0 improves on EmoLLM V1.0 across all scores and surpasses the performance of role-playing ChatGPT on counseling tasks! Click to experience EmoLLM V2.0; dataset statistics and details have been updated; see the Roadmap.
- 【2024.3.9】 Added concurrency-accelerated QA pair generation and the RAG pipeline.
- 【2024.3.3】 EmoLLM V2.0, fully fine-tuned from InternLM2-7B-chat, has been open-sourced (training requires two A100 80G GPUs). Professional evaluation has been updated, see evaluate; PaddleOCR-based PDF-to-txt tool scripts have been updated, see scripts.
- 【2024.2.29】 Updated the objective assessment calculations, see evaluate for details. A series of datasets have also been updated, see datasets for details.
- 【2024.2.27】 Updated the English README and a series of datasets (licking-dog and single-turn dialogue).
- 【2024.2.23】 The "Gentle Lady Psychologist Ai Wei", based on InternLM2_7B_chat_qlora, has been launched. Click here to obtain the model weights, configuration file, and online demo link.
- 【2024.2.23】 Updated several fine-tuning configurations and added data_pro.json (larger quantity, more comprehensive scenarios, richer content) and aiwei.json (dedicated to the gentle-lady role-play, featuring Emoji expressions); the "Gentle Lady Psychologist Ai Wei" is coming soon.
- 【2024.2.18】 The full fine-tuned version based on Qwen1_5-0_5B-Chat has been open-sourced. Friends with limited computational resources can now dive in and explore it.
View More
- 【2024.2.6】 Open-sourced the full-scale fine-tuned version based on Qwen1_5-0_5B-Chat; friends with limited computing power can start experimenting~
- 【2024.2.5】 The project has been promoted by the official WeChat account NLP Engineering. Here's the link to the article. Welcome everyone to follow!! 🥳🥳
- 【2024.2.3】 Project video on bilibili 😊
- 【2024.1.27】 Complete data construction documentation, fine-tuning guide, deployment guide, Readme, and other related documents 👏
- 【2024.1.25】 EmoLLM V1.0 has been deployed online at https://openxlab.org.cn/apps/detail/jujimeizuo/EmoLLM 😀
Honors
- The project won the Innovation and Creativity Award in the 2024 Puyuan Large Model Series Challenge Spring Competition held by the Shanghai Artificial Intelligence Laboratory
Roadmap
Contents
- EmoLLM - Large Language Model for Mental Health
Pre-development Configuration Requirements
- A100 40G (specifically for InternLM2_7B_chat + qlora fine-tuning + deepspeed zero2 optimization)
User Guide
- Clone the repo

```bash
git clone https://github.com/SmartFlowAI/EmoLLM.git
```
- Read in sequence or read sections you're interested in:
- Quick Start
- Data Construction
- Fine-tuning Guide
- Deployment Guide
- RAG
- View More Details
🍪Quick Start
- Please refer to Quick Start.
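For a first local try, the steps below are a minimal sketch rather than the official quick-start. They assume the repo has been cloned as above, that the model weights have already been downloaded, and that web_internlm2.py is the Streamlit entry point for the demo (Streamlit is listed among the frameworks used, but the exact entry script is an assumption here).

```bash
cd EmoLLM
pip install -r requirements.txt

# Model weights must be available locally first; the repo provides download_model.py for this.
# Then launch the web demo (assumed here to be a Streamlit app).
streamlit run web_internlm2.py
```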
📌Data Construction
- Please read the Data Construction Guide for reference.
- The dataset used for this fine-tuning can be found at datasets.
🎨Fine-tuning Guide
For details, see the fine-tuning guide
🔧Deployment Guide
- Demo deployment: see deployment guide for details.
- Quantized deployment based on LMDeploy: see deploy; a brief sketch is shown below.
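As a rough sketch of what LMDeploy-based quantized deployment can look like: the model path, work directory, and port below are placeholders and assumptions, not the project's documented commands; see deploy for the actual guide.

```bash
pip install lmdeploy

# 4-bit AWQ weight quantization of a (hypothetical) local EmoLLM checkpoint.
lmdeploy lite auto_awq ./EmoLLM-7B --work-dir ./EmoLLM-7B-4bit

# Serve the quantized model behind an OpenAI-compatible API.
lmdeploy serve api_server ./EmoLLM-7B-4bit --model-format awq --server-port 23333
```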
⚙RAG (Retrieval Augmented Generation) Pipeline
- See RAG
Additional Details
Frameworks Used
- Xtuner
- Transformers
- Pytorch
- LMDeploy: for quantized deployment
- Streamlit: for building demos
- DeepSpeed: for parallel training
- …
How to participate in this project
Contributions make the open-source community an excellent place for learning, inspiration, and creation. Any contribution you make is greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Version control
This project uses Git for version control. You can see the currently available versions in the repository.
Authors (in no particular order)
Username | School/Organization | Remarks | Contributions |
---|---|---|---|
aJupyter | Nankai University, Master's student | DataWhale member | Project initiator |
MING-ZCH | Huazhong University of Science and Technology, Undergraduate student | LLM X Psychology researcher | Project co-leader |
jujimeizuo | Jiangnan University, Master's student | ||
Smiling-Weeping-zhr | Harbin Institute of Technology (Weihai), Undergraduate student | ||
8baby8 | PaddlePaddle Pilot Team Regional Director | Wenxin Large Model core developer | |
zxazys | Nankai University, Master's student | ||
JasonLLLLLLLLLLL | SWUFE (Southwestern University of Finance and Economics) | ||
MrCatAI | AI Mover | ||
ZeyuBa | Institute of Automation, Master's student | ||
aiyinyuedejustin | University of Pennsylvania, Master's student | ||
Nobody-ML | China University of Petroleum (East China), Undergraduate student | ||
chg0901 | MiniSora | Maintainer and Admin of MiniSora | LLM Pre-Training and Fine-Tuning, Model Uploading, Data Cleaning and Docs Translation |
Mxoder | Beihang University, Undergraduate student | ||
Anooyman | Nanjing University of Science and Technology, Master's student | ||
Vicky-3021 | Xidian University, Master's student (Research Year 0) | ||
SantiagoTOP | Taiyuan University of Technology, Master's student | ||
zealot52099 | Individual developer | Data Processing, LLM finetuning and RAG | |
wwwyfff | Fudan University, Master's student | ||
jkhumor | Nankai University, Master's student | RAG | |
lll997150986 | Nankai University, Master's student | Fine Tuning | |
nln-maker | Nankai University, Master's student | Front-end and back-end development | |
dream00001 | Nankai University, Master's student | Front-end and back-end development | |
王几行XING | Peking University, Master's graduate | Data Processing, LLM finetuning, Front-end and back-end development | |
[思在] | Peking University, Master's graduate (Microsoft) | LLM finetuning, Front-end and back-end development |
Copyright Notice
The project is licensed under the MIT License. Please refer to LICENSE for details.
Acknowledgments
- Sanbu
- Shanghai Artificial Intelligence Laboratory
- Vanin
- Bloom up (WeChat Official Account Promotion)
- Abu (M.A. in Psychology, Peking University)
- HatBoy
Star History
🌟 Contributors
Communication group
- If the group QR code or link has expired, please go to the Issues section.