Update README.md

Update GGUF generation steps
Weitian Leung 2024-01-24 08:53:51 +08:00 committed by GitHub
parent e863eb6b47
commit dc859f9c42

@@ -333,19 +333,21 @@ DeepSeek Coder utilizes the [HuggingFace Tokenizer](https://huggingface.co/docs/
 ##### GGUF(llama.cpp)
-We have submitted a [PR](https://github.com/ggerganov/llama.cpp/pull/4070) to the popular quantization repository [llama.cpp](https://github.com/ggerganov/llama.cpp) to fully support all HuggingFace pre-tokenizers, including ours.
+Update llama.cpp to the latest commit (it must at least contain https://github.com/ggerganov/llama.cpp/pull/3633).
-While waiting for the PR to be merged, you can generate your GGUF model using the following steps:
+Generate the GGUF model using the following steps:
 ```bash
-git clone https://github.com/DOGEwbx/llama.cpp.git
+git clone https://github.com/ggerganov/llama.cpp.git
 cd llama.cpp
-git checkout regex_gpt2_preprocess
 # set up the environment according to README
 make
+# or use `cmake` instead of `make` on Windows
 python3 -m pip install -r requirements.txt
 # generate GGUF model
-python convert-hf-to-gguf.py <MODEL_PATH> --outfile <GGUF_PATH> --model-name deepseekcoder
+python convert.py <YOUR_MODEL_PATH> --vocab-type bpe --pad-vocab
 # use q4_0 quantization as an example
 ./quantize <GGUF_PATH> <OUTPUT_PATH> q4_0
 ./main -m <OUTPUT_PATH> -n 128 -p <PROMPT>
 ```
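
For reference, here is what the updated steps might look like end to end. This is a minimal sketch, assuming the `deepseek-ai/deepseek-coder-1.3b-base` checkpoint and illustrative local file names; only the commands and flags shown in the diff above come from the commit itself.

```bash
# build llama.cpp and install the Python dependencies for conversion
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
python3 -m pip install -r requirements.txt

# optional sanity check: merged llama.cpp PRs typically appear in the log
# with their number, so a new-enough checkout should match here
git log --oneline | grep "#3633"

# fetch a HuggingFace checkpoint (requires git-lfs; path is illustrative)
git clone https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base

# convert the HF checkpoint to GGUF with the vocab options from the README
python convert.py deepseek-coder-1.3b-base --vocab-type bpe --pad-vocab --outfile deepseek-coder-1.3b-f16.gguf

# quantize to q4_0 and run a short completion
./quantize deepseek-coder-1.3b-f16.gguf deepseek-coder-1.3b-q4_0.gguf q4_0
./main -m deepseek-coder-1.3b-q4_0.gguf -n 128 -p "def quicksort(arr):"
```

q4_0 is only one choice; running `./quantize` with no arguments prints the full list of supported quantization types, and higher-bit variants trade file size for output quality.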