From 788c0099223da4dcd2789c71c8a713a078834ac0 Mon Sep 17 00:00:00 2001
From: MING_X <119648793+MING-ZCH@users.noreply.github.com>
Date: Mon, 11 Mar 2024 15:28:56 +0800
Subject: [PATCH] Update General_evaluation_EN.md

---
 evaluate/General_evaluation_EN.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/evaluate/General_evaluation_EN.md b/evaluate/General_evaluation_EN.md
index ab6db16..92dee52 100644
--- a/evaluate/General_evaluation_EN.md
+++ b/evaluate/General_evaluation_EN.md
@@ -39,7 +39,7 @@ The `eval.py` script is used to generate the doctor's response and evaluate it,

 The `metric.py` script contains functions to calculate evaluation metrics, which can be set to evaluate by character level or word level, currently including BLEU and ROUGE scores.

-## Test results
+## Results

 Test the data in data.json with the following results:
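For context on the document touched by this patch: it describes `metric.py` as computing BLEU and ROUGE at either character level or word level. The repository's actual implementation is not part of this patch, so the snippet below is only a minimal illustrative sketch of that idea. It assumes the third-party `nltk` and `rouge` packages, and the `score_pair` helper name is hypothetical.

```python
# Illustrative sketch only: not the repository's metric.py.
# Assumes the third-party `nltk` and `rouge` packages are installed.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge import Rouge


def score_pair(reference: str, hypothesis: str, level: str = "word"):
    """Return BLEU-4 and ROUGE scores for one reference/hypothesis pair.

    level="char" tokenizes by character, level="word" by whitespace.
    """
    if level == "char":
        ref_tokens, hyp_tokens = list(reference), list(hypothesis)
    else:
        ref_tokens, hyp_tokens = reference.split(), hypothesis.split()

    # Smoothed sentence-level BLEU over the chosen token granularity.
    bleu = sentence_bleu(
        [ref_tokens], hyp_tokens, smoothing_function=SmoothingFunction().method1
    )
    # The `rouge` package expects space-separated token strings.
    rouge_scores = Rouge().get_scores(" ".join(hyp_tokens), " ".join(ref_tokens))[0]
    return bleu, rouge_scores


if __name__ == "__main__":
    b, r = score_pair("the doctor suggests rest", "the doctor recommends rest")
    print(f"BLEU-4: {b:.4f}")
    print(f"ROUGE-L F1: {r['rouge-l']['f']:.4f}")
```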