Merge pull request #11 from LyricZhao/main

Fix several rendering problems in README
Fuli Luo 2023-11-06 00:18:44 +08:00 committed by GitHub
commit aea198ba70
1 changed file with 9 additions and 12 deletions

@@ -1,7 +1,7 @@
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="pictures/logo.png">
</p>
<p align="center"><a href="https://www.deepseek.com/">[<img src="pictures/home.png" width="20px"> Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder] | <a href="https://huggingface.co/deepseek-ai">[🤗 Models Download]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a></p>
<p align="center"><a href="https://www.deepseek.com/">[<img src="pictures/home.png" width="20px"> Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://huggingface.co/deepseek-ai">[🤗 Models Download]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[WeChat (微信)]</a></p>
<hr>
@@ -38,7 +38,6 @@ And the DeepSeek-Coder-Instruct-33B model after instruction tuning outperforms G
More evaluation details can be found in the [Detailed Evaluation](#5-detailed-evaluation-results).
### 3. Procedure of Data Creation and Model Training
#### Data Creation
@@ -59,8 +58,6 @@ More evaluation details can be found in the [Detailed Evaluation](#5-detailed-ev
<img src="pictures/model_pretraining.png" alt="model_pretraining" width="100%">
### 4. How to Use
Before proceeding, you'll need to install the necessary dependencies. You can do this by running the following command:
```
@@ -70,7 +67,7 @@ A demo is also available on the [🤗 Hugging Face Space](https://huggingface.co
Here are some examples of how to use our model.
-#### 1Code Completion
+#### 1) Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
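# The lines below are a minimal sketch of how the completion example continues,
# assuming the publicly released deepseek-coder-6.7b-base checkpoint and the
# standard transformers generate() API (not a verbatim copy of the README).
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True, torch_dtype=torch.bfloat16
).cuda()

# A plain comment prompt; the base model simply continues the text as code.
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))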
@@ -97,7 +94,7 @@ def quick_sort(arr):
return quick_sort(left) + [pivot] + quick_sort(right)
```
-#### 2Code Insertion
+#### 2) Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
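# The lines below are a minimal fill-in-the-middle (insertion) sketch, assuming
# the deepseek-coder-6.7b-base checkpoint and DeepSeek's FIM sentinel tokens;
# verify the exact sentinel strings against the released tokenizer before use.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True, torch_dtype=torch.bfloat16
).cuda()

# Code before and after the gap is wrapped in FIM sentinels; the model
# generates the missing middle (here, the loop header of quick_sort).
input_text = """<｜fim▁begin｜>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<｜fim▁hole｜>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<｜fim▁end｜>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))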
@@ -124,7 +121,7 @@ This code will output the following result:
for i in range(1, len(arr)):
```
-#### 3Chat Model Inference
+#### 3) Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
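# The lines below are a minimal chat-inference sketch, assuming the instruct
# checkpoint ships a chat template so transformers' apply_chat_template() can
# build the prompt (an assumption, not a verbatim copy of the README example).
import torch

model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16
).cuda()

messages = [{"role": "user", "content": "write a quick sort algorithm in python."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, eos_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))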
@@ -172,7 +169,7 @@ You are an AI programming assistant, utilizing the Deepseek Coder model, develop
```
-#### 4Repository Level Code Completion
+#### 4) Repository Level Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True)
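# The lines below are a minimal repository-level sketch. The file names are
# hypothetical; the idea is simply to concatenate several project files, each
# preceded by a path comment, into one long prompt for the 16K context window.
import torch

model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True, torch_dtype=torch.bfloat16
).cuda()

repo_files = ["utils.py", "model.py", "main.py"]  # hypothetical project files
parts = []
for path in repo_files:
    with open(path) as f:
        parts.append(f"# {path}\n{f.read()}")
input_text = "\n".join(parts)

inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=140)
print(tokenizer.decode(outputs[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))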
@@ -265,16 +262,16 @@ In the following scenario, the Deepseek-Coder 6.7B model effectively calls a clas
### 5. Detailed Evaluation Results
The reproducible code for the following evaluation results can be found in the [Evaluation](https://github.com/deepseek-ai/deepseek-coder/tree/main/Evaluation) directory.
-#### 1Multilingual HumanEval Benchmark
+#### 1) Multilingual HumanEval Benchmark
![HumanEval](pictures/HumanEval.png)
-#### 2MBPP Benchmark
+#### 2) MBPP Benchmark
<img src="pictures/MBPP.png" alt="MBPP" width="40%">
-#### 3DS-1000 Benchmark
+#### 3) DS-1000 Benchmark
![DS-1000](pictures/DS-1000.png)
-#### 4Program-Aid Math Reasoning Benchmark
+#### 4) Program-Aid Math Reasoning Benchmark
![Math](pictures/Math.png)