Making Large Language Models Perform Better in Knowledge Graph Completion


Large language model (LLM) based knowledge graph completion (KGC) aims to predict missing triples in KGs with LLMs, enriching the KGs into better web infrastructure that can benefit many web-based automatic services. However, research on LLM-based KGC is limited and makes little effective use of the LLM's inference capabilities: it ignores the important structural information in KGs and prevents LLMs from acquiring accurate factual knowledge. In this paper, we discuss how to incorporate helpful KG structural information into LLMs, aiming to achieve structural-aware reasoning in the LLMs. We first transfer the existing LLM paradigms to structural-aware settings and further propose a knowledge prefix adapter (KoPA) to fulfill this goal. KoPA employs structural embedding pre-training to capture the structural information of entities and relations in the KG. KoPA then uses the knowledge prefix adapter to project the structural embeddings into the textual space, obtaining virtual knowledge tokens that are prepended to the input prompt. We conduct comprehensive experiments on these structural-aware LLM-based KGC methods and provide an in-depth analysis of how introducing structural information improves the LLM's knowledge reasoning ability.

🌈 Model Architecture

(Figure: KoPA model architecture)
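
Conceptually, KoPA projects the pre-trained structural embeddings of an input triple into the LLM's token embedding space and prepends the resulting virtual knowledge tokens to the textual prompt. The PyTorch sketch below only illustrates this idea; the class and argument names are hypothetical and may differ from the actual implementation in kopa.py.

# Illustrative sketch of the knowledge prefix adapter idea
# (hypothetical names, not the repository's exact code).
import torch
import torch.nn as nn

class PrefixKGAdapter(nn.Module):
    def __init__(self, kge_dim: int, llm_dim: int, num_prefix: int = 1):
        super().__init__()
        # map one (h, r, t) structural embedding to num_prefix virtual tokens
        self.proj = nn.Linear(3 * kge_dim, num_prefix * llm_dim)
        self.num_prefix = num_prefix
        self.llm_dim = llm_dim

    def forward(self, h_emb, r_emb, t_emb, prompt_embeds):
        # h_emb, r_emb, t_emb: (batch, kge_dim); prompt_embeds: (batch, seq_len, llm_dim)
        triple = torch.cat([h_emb, r_emb, t_emb], dim=-1)
        prefix = self.proj(triple).view(-1, self.num_prefix, self.llm_dim)
        # prepend the virtual knowledge tokens to the prompt token embeddings
        return torch.cat([prefix, prompt_embeds], dim=1)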

🔬 Dependencies

Our code is developed based on alpaca-lora. Please build the Python environment following the instructions in alpaca-lora.

Core Python library versions (a quick check snippet follows the list):

  • Python 3.9.16
  • torch 2.0.0
  • transformers 4.28.0
  • peft 0.3.0
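
If you want to verify that your environment matches these pins, here is a quick illustrative check in Python:

# Illustrative sanity check of the pinned library versions above.
import torch, transformers, peft
print("torch:", torch.__version__)                # expected 2.0.0
print("transformers:", transformers.__version__)  # expected 4.28.0
print("peft:", peft.__version__)                  # expected 0.3.0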

📕 Training & Test

  • Note: the current dataset is only a small demo to help you run the full pipeline. We will release the full datasets in the future.

  • run KoPA tuning

export WANDB_DISABLED=true
wandb offline
CUDA_VISIBLE_DEVICES=0 nohup python finetune_kopa.py \
    --base_model 'YOUR LLM PATH' \
    --data_path 'data/UMLS-train.json' \
    --output_dir 'YOUR SAVE PATH' \
    --num_epochs 3 \
    --lora_r 64 \
    --learning_rate 3e-4 \
    --batch_size 12 \
    --micro_batch_size 12 \
    --num_prefix 1 \
    --kge_model 'data/UMLS-rotate.pth' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' > log.txt &

Fill in your LLM path and save path before running.
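
Before launching, you can also sanity-check that the pre-trained structural embedding checkpoint referenced by --kge_model loads correctly (illustrative snippet; the object type depends on how the checkpoint was saved):

# Illustrative check that the KGE checkpoint can be loaded.
import torch
kge_ckpt = torch.load('data/UMLS-rotate.pth', map_location='cpu')
print(type(kge_ckpt))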

  • run inference
CUDA_VISIBLE_DEVICES=0 python inference_kopa.py

🤝 Cite:

Please consider citing this paper if you use the code from our work. Thanks a lot :)

@misc{zhang2023making,
      title={Making Large Language Models Perform Better in Knowledge Graph Completion}, 
      author={Yichi Zhang and Zhuo Chen and Wen Zhang and Huajun Chen},
      year={2023},
      eprint={2310.06671},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}