From 90c1955d384d4838e74418a59c29cd2aa8134b08 Mon Sep 17 00:00:00 2001
From: DeepSeekPH <152240452+DeepSeekPH@users.noreply.github.com>
Date: Wed, 29 Nov 2023 22:18:18 +0800
Subject: [PATCH] Update README.md

fix math eval typo
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4843b9a..84eb565 100644
--- a/README.md
+++ b/README.md
@@ -64,7 +64,7 @@ Introducing DeepSeek LLM, an advanced language model comprising 67 billion param
 
 - **Superior General Capabilities:** DeepSeek LLM 67B Base outperforms Llama2 70B Base in areas such as reasoning, coding, math, and Chinese comprehension.
 
-- **Proficient in Coding and Math:** DeepSeek LLM 67B Chat exhibits outstanding performance in coding (HumanEval Pass@1: 73.78) and mathematics (GSM8K 0-shot: 84.1, Math 4-shot: 32.6). It also demonstrates remarkable generalization abilities, as evidenced by its exceptional score of 65 on the Hungarian National High School Exam.
+- **Proficient in Coding and Math:** DeepSeek LLM 67B Chat exhibits outstanding performance in coding (HumanEval Pass@1: 73.78) and mathematics (GSM8K 0-shot: 84.1, Math 0-shot: 32.6). It also demonstrates remarkable generalization abilities, as evidenced by its exceptional score of 65 on the Hungarian National High School Exam.
 
 - **Mastery in Chinese Language:** Based on our evaluation, DeepSeek LLM 67B Chat surpasses GPT-3.5 in Chinese.
 