From cfa072c50e99ea1d5e92d9a3a763934a612c4eef Mon Sep 17 00:00:00 2001
From: JacobLinCool
Date: Sat, 17 Feb 2024 20:16:18 +0800
Subject: [PATCH] fix in-page link for detailed eval results

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 321f467..321c9d0 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,7 @@ The result shows that DeepSeek-Coder-Base-33B significantly outperforms existing
 Surprisingly, our DeepSeek-Coder-Base-7B reaches the performance of CodeLlama-34B.
 The DeepSeek-Coder-Instruct-33B model after instruction tuning outperforms GPT35-turbo on HumanEval and achieves comparable results with GPT35-turbo on MBPP.
 
-More evaluation details can be found in the [Detailed Evaluation](#5-detailed-evaluation-results).
+More evaluation details can be found in the [Detailed Evaluation](#6-detailed-evaluation-results).
 
 ### 3. Procedure of Data Creation and Model Training