commit 1c8e7915f5f9aa7542ccad0571e0316e8f46ed56 Author: zwd973-deepseek Date: Thu Jan 11 10:35:17 2024 +0800 initial commit diff --git a/LICENSE-CODE b/LICENSE-CODE new file mode 100644 index 0000000..d84f527 --- /dev/null +++ b/LICENSE-CODE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2023 DeepSeek + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/LICENSE-MODEL b/LICENSE-MODEL new file mode 100644 index 0000000..9489e95 --- /dev/null +++ b/LICENSE-MODEL @@ -0,0 +1,91 @@ +DEEPSEEK LICENSE AGREEMENT + +Version 1.0, 23 October 2023 + +Copyright (c) 2023 DeepSeek + +Section I: PREAMBLE + +Large generative models are being widely adopted and used, and have the potential to transform the way individuals conceive and benefit from AI or ML technologies. + +Notwithstanding the current and potential benefits that these artifacts can bring to society at large, there are also concerns about potential misuses of them, either due to their technical limitations or ethical considerations. + +In short, this license strives for both the open and responsible downstream use of the accompanying model. When it comes to the open character, we took inspiration from open source permissive licenses regarding the grant of IP rights. Referring to the downstream responsible use, we added use-based restrictions not permitting the use of the model in very specific scenarios, in order for the licensor to be able to enforce the license in case potential misuses of the Model may occur. At the same time, we strive to promote open and responsible research on generative models for content generation. + +Even though downstream derivative versions of the model could be released under different licensing terms, the latter will always have to include - at minimum - the same use-based restrictions as the ones in the original license (this license). We believe in the intersection between open and responsible AI development; thus, this agreement aims to strike a balance between both in order to enable responsible open-science in the field of AI. + +This License governs the use of the model (and its derivatives) and is informed by the model card associated with the model. + +NOW THEREFORE, You and DeepSeek agree as follows: + +1. Definitions +"License" means the terms and conditions for use, reproduction, and Distribution as defined in this document. +"Data" means a collection of information and/or content extracted from the dataset used with the Model, including to train, pretrain, or otherwise evaluate the Model. 
The Data is not licensed under this License. +"Output" means the results of operating a Model as embodied in informational content resulting therefrom. +"Model" means any accompanying machine-learning based assemblies (including checkpoints), consisting of learnt weights, parameters (including optimizer states), corresponding to the model architecture as embodied in the Complementary Material, that have been trained or tuned, in whole or in part on the Data, using the Complementary Material. +"Derivatives of the Model" means all modifications to the Model, works based on the Model, or any other model which is created or initialized by transfer of patterns of the weights, parameters, activations or output of the Model, to the other model, in order to cause the other model to perform similarly to the Model, including - but not limited to - distillation methods entailing the use of intermediate data representations or methods based on the generation of synthetic data by the Model for training the other model. +"Complementary Material" means the accompanying source code and scripts used to define, run, load, benchmark or evaluate the Model, and used to prepare data for training or evaluation, if any. This includes any accompanying documentation, tutorials, examples, etc, if any. +"Distribution" means any transmission, reproduction, publication or other sharing of the Model or Derivatives of the Model to a third party, including providing the Model as a hosted service made available by electronic or other remote means - e.g. API-based or web access. +"DeepSeek" (or "we") means Beijing DeepSeek Artificial Intelligence Fundamental Technology Research Co., Ltd., Hangzhou DeepSeek Artificial Intelligence Fundamental Technology Research Co., Ltd. and/or any of their affiliates. +"You" (or "Your") means an individual or Legal Entity exercising permissions granted by this License and/or making use of the Model for whichever purpose and in any field of use, including usage of the Model in an end-use application - e.g. chatbot, translator, etc. +"Third Parties" means individuals or legal entities that are not under common control with DeepSeek or You. + +Section II: INTELLECTUAL PROPERTY RIGHTS + +Both copyright and patent grants apply to the Model, Derivatives of the Model and Complementary Material. The Model and Derivatives of the Model are subject to additional terms as described in Section III. + +2. Grant of Copyright License. Subject to the terms and conditions of this License, DeepSeek hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform, sublicense, and distribute the Complementary Material, the Model, and Derivatives of the Model. + +3. Grant of Patent License. Subject to the terms and conditions of this License and where and as applicable, DeepSeek hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this paragraph) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Model and the Complementary Material, where such license applies only to those patent claims licensable by DeepSeek that are necessarily infringed by its contribution(s). 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model and/or Complementary Material constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for the Model and/or works shall terminate as of the date such litigation is asserted or filed. + + +Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION + +4. Distribution and Redistribution. You may host for Third Party remote access purposes (e.g. software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in any medium, with or without modifications, provided that You meet the following conditions: +a. Use-based restrictions as referenced in paragraph 5 MUST be included as an enforceable provision by You in any type of legal agreement (e.g. a license) governing the use and/or distribution of the Model or Derivatives of the Model, and You shall give notice to subsequent users You Distribute to, that the Model or Derivatives of the Model are subject to paragraph 5. This provision does not apply to the use of Complementary Material. +b. You must give any Third Party recipients of the Model or Derivatives of the Model a copy of this License; +c. You must cause any modified files to carry prominent notices stating that You changed the files; +d. You must retain all copyright, patent, trademark, and attribution notices excluding those notices that do not pertain to any part of the Model, Derivatives of the Model. +e. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions - respecting paragraph 4.a. – for use, reproduction, or Distribution of Your modifications, or for any such Derivatives of the Model as a whole, provided Your use, reproduction, and Distribution of the Model otherwise complies with the conditions stated in this License. + +5. Use-based restrictions. The restrictions set forth in Attachment A are considered Use-based restrictions. Therefore You cannot use the Model and the Derivatives of the Model for the specified restricted uses. You may use the Model subject to this License, including only for lawful purposes and in accordance with the License. Use may include creating any content with, finetuning, updating, running, training, evaluating and/or reparametrizing the Model. You shall require all of Your users who use the Model or a Derivative of the Model to comply with the terms of this paragraph (paragraph 5). + +6. The Output You Generate. Except as set forth herein, DeepSeek claims no rights in the Output You generate using the Model. You are accountable for the Output you generate and its subsequent uses. No use of the output can contravene any provision as stated in the License. + +Section IV: OTHER PROVISIONS + +7. Updates and Runtime Restrictions. To the maximum extent permitted by law, DeepSeek reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this License. + +8. Trademarks and related. Nothing in this License permits You to make use of DeepSeek’ trademarks, trade names, logos or to otherwise suggest endorsement or misrepresent the relationship between the parties; and any rights not expressly granted herein are reserved by DeepSeek. + +9. Personal information, IP rights and related. This Model may contain personal information and works with IP rights. 
You commit to complying with applicable laws and regulations in the handling of personal information and the use of such works. Please note that DeepSeek's license granted to you to use the Model does not imply that you have obtained a legitimate basis for processing the related information or works. As an independent personal information processor and IP rights user, you need to ensure full compliance with relevant legal and regulatory requirements when handling personal information and works with IP rights that may be contained in the Model, and are willing to assume solely any risks and consequences that may arise from that. + +10. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, DeepSeek provides the Model and the Complementary Material on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Model, Derivatives of the Model, and the Complementary Material and assume any risks associated with Your exercise of permissions under this License. + +11. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall DeepSeek be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Model and the Complementary Material (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if DeepSeek has been advised of the possibility of such damages. + +12. Accepting Warranty or Additional Liability. While redistributing the Model, Derivatives of the Model and the Complementary Material thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of DeepSeek, and only if You agree to indemnify, defend, and hold DeepSeek harmless for any liability incurred by, or claims asserted against, DeepSeek by reason of your accepting any such warranty or additional liability. + +13. If any provision of this License is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein. + +14. Governing Law and Jurisdiction. This agreement will be governed and construed under PRC laws without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this agreement. The courts located in the domicile of Hangzhou DeepSeek Artificial Intelligence Fundamental Technology Research Co., Ltd. shall have exclusive jurisdiction of any dispute arising out of this agreement. 
+ +END OF TERMS AND CONDITIONS + +Attachment A + +Use Restrictions + +You agree not to use the Model or Derivatives of the Model: + +- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; +- For military use in any way; +- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; +- To generate or disseminate verifiably false information and/or content with the purpose of harming others; +- To generate or disseminate inappropriate content subject to applicable regulatory requirements; +- To generate or disseminate personal identifiable information without due authorization or for unreasonable use; +- To defame, disparage or otherwise harass others; +- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation; +- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; +- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; +- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. diff --git a/README.md b/README.md new file mode 100644 index 0000000..1055068 --- /dev/null +++ b/README.md @@ -0,0 +1,275 @@ + + + + +
+ DeepSeek LLM +
+
+
+ + + Homepage + + + Chat + + + Hugging Face + + +
+ +
+ + + Discord + + + Wechat + + + Twitter Follow + + +
+ +
+ + + Code License + + + Model License + +
+ + +

+ Model Download | + Evaluation Results | + Quick Start | + License | + Citation +

## 1. Introduction

DeepSeekMoE 16B is a Mixture-of-Experts (MoE) language model with 16.4B parameters.
It employs an innovative MoE architecture, which involves two principal strategies: fine-grained expert segmentation and shared expert isolation.
It is trained from scratch on 2T tokens, and exhibits comparable performance with DeepSeek 7B and LLaMA2 7B, with only about 40% of computations.
For research purposes, we release the model checkpoints of DeepSeekMoE 16B Base and DeepSeekMoE 16B Chat to the public; both can be deployed on a single GPU with 40GB of memory without the need for quantization.

## 2. Evaluation Results

### DeepSeekMoE 16B Base

We evaluate DeepSeekMoE 16B on various benchmarks and compare it with a series of models, as shown in the following.

- Comparison with open-source models on the Open LLM Leaderboard. DeepSeekMoE 16B consistently outperforms models with a similar number of activated parameters by a large margin, and achieves comparable performance with LLaMA2 7B, which has approximately 2.5 times the activated parameters.

+table +

+ +- Comparison with DeepSeek 7B on our internal benchmarks. DeepSeek 7B is a dense model trained on the same corpus as DeepSeekMoE 16B. With only 40.5% of computations, DeepSeekMoE 16B achieves comparable performance with DeepSeek 7B. + +

+table +

+ +- Comparison with LLaMA2 7B on our internal benchmarks. With only 39.6% of computations, DeepSeekMoE 16B outperforms LLaMA2 7B on the majority of benchmarks. + +

+table +

+ +### DeepSeekMoE 16B Chat + +We also evaluate DeepSeekMoE 16B Chat on various benchmarks and compare it with DeepSeek 7B Chat and LLaMA2 7B SFT. All of the compared models follow the same fine-tuning setting and data for fair comparison. +The evaluation results are shown in the following. With only about 40% of computations, DeepSeekMoE 16B Chat achieves comparable or better performance than DeepSeek 7B Chat and LLaMA2 7B SFT. + +

+table +

## 3. Model Downloads

We release DeepSeekMoE 16B, including both base and chat models, to the public, in order to support a broader and more diverse range of research within both academic and commercial communities. Please **note** that the use of this model is subject to the terms outlined in the [License section](#5-license). Commercial usage is permitted under these terms.

### Huggingface

| Model                 | Sequence Length | Download                                                                    |
|:---------------------:|:---------------:|:---------------------------------------------------------------------------:|
| DeepSeekMoE 16B Base  | 4096            | 🤗 [HuggingFace](https://huggingface.co/deepseek-ai/deepseek-moe-16b-base)  |
| DeepSeekMoE 16B Chat  | 4096            | 🤗 [HuggingFace](https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat)  |

## 4. Quick Start
### Installation

With a `Python >= 3.8` environment, install the necessary dependencies by running the following command:

```shell
pip install -r requirements.txt
```

### Inference with Huggingface's Transformers

You can directly employ [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference.

**Text Completion**

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-moe-16b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```

**Chat Completion**

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-moe-16b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "Who are you?"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```

If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model following the sample template below. Note that `messages` should be replaced by your input.

```
User: {messages[0]['content']}

Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']}

Assistant:
```

**Note:** By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including a system prompt in your input.
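As an illustration of the template above, here is a minimal, hypothetical helper (not part of the repository; the function name and example output are ours) that renders a `messages` list into that plain-text format. The `bos_token` is still added automatically by the tokenizer, as noted above.

```python
# Illustrative sketch (not part of the repository): render a `messages` list into the
# plain-text chat template shown above. The tokenizer still prepends the bos_token
# (<|begin▁of▁sentence|>) automatically when encoding the returned string.
def render_chat_prompt(messages):
    eos = "<|end▁of▁sentence|>"
    prompt = ""
    for message in messages:
        if message["role"] == "user":
            prompt += f"User: {message['content']}\n\nAssistant:"
        elif message["role"] == "assistant":
            prompt += f" {message['content']}{eos}"
    return prompt

print(render_chat_prompt([{"role": "user", "content": "Who are you?"}]))
# User: Who are you?
#
# Assistant:
```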
### How to Fine-tune DeepSeekMoE

We provide the script `finetune/finetune.py` for users to fine-tune our models on downstream tasks.

The script supports training with [DeepSpeed](https://github.com/microsoft/DeepSpeed). You need to install the required packages by running:

```bash
pip install -r requirements.txt
```

Please follow the [Sample Dataset Format](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) to prepare your training data.
Each item has two required fields: `instruction` and `output`. (A minimal data-preparation sketch appears at the end of this README.)

After data preparation, you can use the sample shell script to fine-tune the DeepSeekMoE model.
Remember to specify `DATA_PATH`, `OUTPUT_PATH`, and `MODEL_PATH`, and please choose appropriate hyper-parameters (e.g., `learning_rate`, `per_device_train_batch_size`) for your scenario.

```bash
DATA_PATH=""
OUTPUT_PATH=""
MODEL_PATH=""

cd finetune
deepspeed finetune.py \
    --model_name_or_path $MODEL_PATH \
    --data_path $DATA_PATH \
    --output_dir $OUTPUT_PATH \
    --num_train_epochs 3 \
    --model_max_length 1024 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 100 \
    --save_total_limit 100 \
    --learning_rate 2e-5 \
    --warmup_steps 10 \
    --logging_steps 1 \
    --lr_scheduler_type "cosine" \
    --gradient_checkpointing True \
    --report_to "tensorboard" \
    --deepspeed configs/ds_config_zero3.json \
    --bf16 True \
    --use_lora False
```

You can also fine-tune the model with 4/8-bit QLoRA; feel free to try it.

```bash
DATA_PATH=""
OUTPUT_PATH=""
MODEL_PATH=""

cd finetune
deepspeed finetune.py \
    --model_name_or_path $MODEL_PATH \
    --data_path $DATA_PATH \
    --output_dir $OUTPUT_PATH \
    --num_train_epochs 3 \
    --model_max_length 1024 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 100 \
    --save_total_limit 100 \
    --learning_rate 2e-5 \
    --warmup_steps 10 \
    --logging_steps 1 \
    --lr_scheduler_type "cosine" \
    --gradient_checkpointing True \
    --report_to "tensorboard" \
    --deepspeed configs/ds_config_zero2_no_offload.json \
    --bf16 True \
    --use_lora True \
    --bits 4 \
    --max_grad_norm 0.3 \
    --double_quant \
    --lora_rank 64 \
    --lora_alpha 16 \
    --quant_type nf4
```

## 5. License
This code repository is licensed under the MIT License. The use of DeepSeekMoE models is subject to the Model License. DeepSeekMoE supports commercial use.

See the [LICENSE-CODE](LICENSE-CODE) and [LICENSE-MODEL](LICENSE-MODEL) for more details.

## 6. Citation

```
@article{deepseekmoe,
  [coming soon]
}
```

## 7. Contact

If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
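As a reference for the data-preparation step described in the fine-tuning section above, here is a minimal sketch (not part of the repository; the file name and records are illustrative, and it assumes `pandas` with `pyarrow` is installed) that writes `instruction`/`output` records to a Parquet file, which is the format `finetune/finetune.py` reads via `load_dataset('parquet', data_files=...)`.

```python
# Minimal sketch (illustrative, not part of the repository): write instruction/output
# records to a Parquet file for finetune.py, which loads training data with
# load_dataset('parquet', data_files=DATA_PATH). Assumes pandas and pyarrow are installed.
import pandas as pd

records = [
    {"instruction": "Summarize: The cat sat on the mat.", "output": "A cat sat on a mat."},
    {"instruction": "Translate 'bonjour' into English.", "output": "Hello."},
]
pd.DataFrame(records).to_parquet("train.parquet", index=False)  # pass this path as DATA_PATH
```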
diff --git a/finetune/configs/ds_config_zero2_no_offload.json b/finetune/configs/ds_config_zero2_no_offload.json new file mode 100644 index 0000000..b58ab8e --- /dev/null +++ b/finetune/configs/ds_config_zero2_no_offload.json @@ -0,0 +1,22 @@ +{ + "bf16": { + "enabled": true + }, + + "zero_optimization": { + "stage": 2, + "allgather_partitions": true, + "allgather_bucket_size": 1e8, + "overlap_comm": true, + "reduce_scatter": true, + "reduce_bucket_size": 1e8, + "contiguous_gradients": true + }, + + "gradient_accumulation_steps": "auto", + "gradient_clipping": "auto", + "steps_per_print": 2000, + "train_batch_size": "auto", + "train_micro_batch_size_per_gpu": "auto", + "wall_clock_breakdown": false +} \ No newline at end of file diff --git a/finetune/configs/ds_config_zero3.json b/finetune/configs/ds_config_zero3.json new file mode 100644 index 0000000..73f3b5f --- /dev/null +++ b/finetune/configs/ds_config_zero3.json @@ -0,0 +1,51 @@ +{ + "bf16": { + "enabled": "auto" + }, + "optimizer": { + "type": "AdamW", + "params": { + "lr": "auto", + "betas": "auto", + "eps": "auto", + "weight_decay": "auto" + } + }, + + "scheduler": { + "type": "WarmupLR", + "params": { + "warmup_min_lr": "auto", + "warmup_max_lr": "auto", + "warmup_num_steps": "auto" + } + }, + + "zero_optimization": { + "stage": 3, + "offload_optimizer": { + "device": "cpu", + "pin_memory": true + }, + "offload_param": { + "device": "cpu", + "pin_memory": true + }, + "overlap_comm": true, + "contiguous_gradients": true, + "sub_group_size": 1e9, + "reduce_bucket_size": "auto", + "stage3_prefetch_bucket_size": "auto", + "stage3_param_persistence_threshold": "auto", + "stage3_max_live_parameters": 1e9, + "stage3_max_reuse_distance": 1e9, + "stage3_gather_16bit_weights_on_model_save": true + }, + + "gradient_accumulation_steps": "auto", + "gradient_clipping": "auto", + "steps_per_print": 20, + "train_batch_size": "auto", + "train_micro_batch_size_per_gpu": "auto", + "wall_clock_breakdown": false +} \ No newline at end of file diff --git a/finetune/finetune.py b/finetune/finetune.py new file mode 100644 index 0000000..71c2700 --- /dev/null +++ b/finetune/finetune.py @@ -0,0 +1,322 @@ +import copy +import random +from dataclasses import dataclass, field +from typing import Optional, Dict, Sequence +import logging +import os + +import torch +import torch.distributed +import transformers +from transformers import Trainer, BitsAndBytesConfig +from datasets import load_dataset +import datasets +import numpy as np +from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training, PeftModel +from peft.tuners.lora import LoraLayer +from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR + +IGNORE_INDEX = -100 +EOT_TOKEN = "<|EOT|>" +logger = logging.getLogger(__name__) + +def build_instruction_prompt(instruction: str): + return ''' +You are an AI assistant, developed by DeepSeek Company. For politically sensitive questions, security and privacy issues, you will refuse to answer. +### Instruction: +{} +### Response: +'''.format(instruction.strip()).lstrip() + +@dataclass +class ModelArguments: + trainable : Optional[str] = field(default="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj") + lora_rank : Optional[int] = field(default=8) + lora_dropout : Optional[float] = field(default=0.1) + lora_alpha : Optional[float] = field(default=32.) 
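# Added descriptive comment (not in the original file): the ModelArguments fields around this
# point configure LoRA and quantization. `trainable` lists the LoRA target modules,
# `lora_rank`/`lora_alpha`/`lora_dropout` set the adapter rank, scaling, and dropout,
# `modules_to_save` keeps embed_tokens and lm_head fully trainable alongside the adapters,
# `use_lora` toggles the PEFT path, and `bits`/`quant_type`/`double_quant` select the
# bitsandbytes 4/8-bit quantization used for QLoRA-style training.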
+ modules_to_save : Optional[str] = field(default="embed_tokens,lm_head") + use_lora : Optional[bool] = field(default=False) + model_name_or_path: Optional[str] = field(default="deepseek-ai/deepseek-moe-16b") + attn_implementation : Optional[str] = field(default="flash_attention_2") + double_quant: bool = field( + default=True, + metadata={"help": "Compress the quantization statistics through double quantization."} + ) + quant_type: str = field( + default="nf4", + metadata={"help": "Quantization data type to use. Should be one of `fp4` or `nf4`."} + ) + bits: int = field( + default=16, + metadata={"help": "How many bits to use."} + ) + +@dataclass +class DataArguments: + data_path: str = field(default=None, metadata={"help": "Path to the training data."}) + + +@dataclass +class TrainingArguments(transformers.TrainingArguments): + + cache_dir: Optional[str] = field(default=None) + optim: str = field(default="adamw_torch") + model_max_length: int = field( + default=512, + metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."}, + ) + +class SavePeftModelCallback(transformers.TrainerCallback): + def save_model(self, args, state, kwargs): + logger.info('Saving PEFT checkpoint...') + if state.best_model_checkpoint is not None: + checkpoint_folder = os.path.join(state.best_model_checkpoint, "adapter_model") + else: + checkpoint_folder = os.path.join(args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}") + + peft_model_path = os.path.join(checkpoint_folder, "adapter_model") + kwargs["model"].save_pretrained(peft_model_path) + kwargs["tokenizer"].save_pretrained(peft_model_path) + + def on_save(self, args, state, control, **kwargs): + self.save_model(args, state, kwargs) + return control + + def on_train_end(self, args, state, control, **kwargs): + def touch(fname, times=None): + with open(fname, 'a'): + os.utime(fname, times) + touch(os.path.join(args.output_dir, 'completed')) + self.save_model(args, state, kwargs) + +def get_last_checkpoint(checkpoint_dir): + if os.path.isdir(checkpoint_dir): + is_completed = os.path.exists(os.path.join(checkpoint_dir, 'completed')) + if is_completed: return None # already finished + max_step = 0 + for filename in os.listdir(checkpoint_dir): + if os.path.isdir(os.path.join(checkpoint_dir, filename)) and filename.startswith(PREFIX_CHECKPOINT_DIR): + max_step = max(max_step, int(filename.replace(PREFIX_CHECKPOINT_DIR + '-', ''))) + if max_step == 0: return None + latest_ckpt_dir = os.path.join(checkpoint_dir, f'{PREFIX_CHECKPOINT_DIR}-{max_step}') + logger.info(f"Found a previous checkpoint at: {checkpoint_dir}") + return latest_ckpt_dir + return None # first training + +def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str): + """Collects the state dict and dump to disk.""" + state_dict = trainer.model.state_dict() + if trainer.args.should_save: + cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()} + del state_dict + trainer._save(output_dir, state_dict=cpu_state_dict) # noqa + + +def _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict: + """Tokenize a list of strings.""" + tokenized_list = [ + tokenizer( + text, + # return_tensors="pt", + max_length=tokenizer.model_max_length, + truncation=True, + ) + for text in strings + ] + input_ids = labels = [np.array(tokenized.input_ids) for tokenized in tokenized_list] + input_ids_lens = labels_lens = [ + len(tokenized.input_ids) for tokenized in tokenized_list + ] + + return 
dict( + input_ids=input_ids, + labels=labels, + input_ids_lens=input_ids_lens, + labels_lens=labels_lens, + ) + + +def preprocess( + sources: Sequence[str], + targets: Sequence[str], + tokenizer: transformers.PreTrainedTokenizer, +) -> Dict: + """Preprocess the data by tokenizing.""" + examples = [s + t for s, t in zip(sources, targets)] + examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)] + input_ids = examples_tokenized["input_ids"] + labels = copy.deepcopy(input_ids) + for label, source_len in zip(labels, sources_tokenized["input_ids_lens"]): + label[:source_len] = IGNORE_INDEX + return dict(input_ids=input_ids, labels=labels) + +@dataclass +class DataCollatorForSupervisedDataset(object): + """Collate examples for supervised fine-tuning.""" + tokenizer: transformers.PreTrainedTokenizer + + def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]: + input_ids, labels = tuple([instance[key] for instance in instances] for key in ("input_ids", "labels")) + input_ids = [torch.tensor(x) for x in input_ids] + input_ids = torch.nn.utils.rnn.pad_sequence( + input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id + ) + labels = [torch.tensor(x) for x in labels] + labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX) + + return dict( + input_ids=input_ids, + labels=labels, + attention_mask=input_ids.ne(self.tokenizer.pad_token_id), + ) + +def train_tokenize_function(examples, tokenizer): + sources = [ + build_instruction_prompt(instruction) + for instruction in examples['instruction'] + ] + targets = [f"{output}\n{EOT_TOKEN}" for output in examples['output']] + data_dict = preprocess(sources, targets, tokenizer) + return data_dict + +def build_model(model_args, training_args, checkpoint_dir): + if not model_args.use_lora: assert model_args.bits in [16, 32] + compute_dtype = (torch.bfloat16 if training_args.bf16 else torch.float16) + model = transformers.AutoModelForCausalLM.from_pretrained( + model_args.model_name_or_path, + load_in_4bit=model_args.bits == 4, + load_in_8bit=model_args.bits == 8, + quantization_config=BitsAndBytesConfig( + load_in_4bit=model_args.bits == 4, + load_in_8bit=model_args.bits == 8, + llm_int8_threshold=6.0, + llm_int8_has_fp16_weight=False, + bnb_4bit_compute_dtype=compute_dtype, + bnb_4bit_use_double_quant=model_args.double_quant, + bnb_4bit_quant_type=model_args.quant_type, + ), + torch_dtype=compute_dtype, + trust_remote_code=True, + ) + + if compute_dtype == torch.float16 and model_args.bits == 4: + if torch.cuda.is_bf16_supported(): + logger.info('='*80) + logger.info('Your GPU supports bfloat16, you can accelerate training with the argument --bf16') + logger.info('='*80) + setattr(model, 'model_parallel', True) + setattr(model, 'is_parallelizable', True) + model.config.torch_dtype=torch.bfloat16 if training_args.bf16 else torch.float32 + # Tokenizer + + if model_args.use_lora and model_args.bits < 16: + model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=training_args.gradient_checkpointing) + + if model_args.use_lora: + if checkpoint_dir is not None: + logger.info(f"Loading adapters from {checkpoint_dir}.") + # os.path.join(checkpoint_dir, 'adapter_model') + model = PeftModel.from_pretrained(model, checkpoint_dir, is_trainable=True) + else: + logger.info(f'Init LoRA modules...') + target_modules = model_args.trainable.split(',') + modules_to_save = model_args.modules_to_save + if modules_to_save is not None: + 
modules_to_save = modules_to_save.split(',') + lora_rank = model_args.lora_rank + lora_dropout = model_args.lora_dropout + lora_alpha = model_args.lora_alpha + peft_config = LoraConfig( + task_type=TaskType.CAUSAL_LM, + target_modules=target_modules, + inference_mode=False, + r=lora_rank, lora_alpha=lora_alpha, + lora_dropout=lora_dropout, + modules_to_save=modules_to_save) + model = get_peft_model(model, peft_config) + + for name, module in model.named_modules(): + if isinstance(module, LoraLayer): + if training_args.bf16: + module = module.to(torch.bfloat16) + if 'norm' in name or 'gate' in name: + module = module.to(torch.float32) + if 'lm_head' in name or 'embed_tokens' in name: + if hasattr(module, 'weight'): + if training_args.bf16 and module.weight.dtype == torch.float32: + module = module.to(torch.bfloat16) + return model + +def train(): + parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments)) + model_args, data_args, training_args = parser.parse_args_into_dataclasses() + log_level = training_args.get_process_log_level() + logger.setLevel(log_level) + datasets.utils.logging.set_verbosity(log_level) + transformers.utils.logging.set_verbosity(log_level) + transformers.utils.logging.enable_default_handler() + transformers.utils.logging.enable_explicit_format() + if training_args.local_rank == 0: + logger.info('='*100) + logger.info(training_args) + + tokenizer = transformers.AutoTokenizer.from_pretrained( + model_args.model_name_or_path, + model_max_length=training_args.model_max_length, + padding_side="right", + use_fast=True, + trust_remote_code=True + ) + + logger.info("PAD Token:", tokenizer.pad_token, tokenizer.pad_token_id) + logger.info("BOS Token", tokenizer.bos_token, tokenizer.bos_token_id) + logger.info("EOS Token", tokenizer.eos_token, tokenizer.eos_token_id) + + if training_args.local_rank == 0: + logger.info("Load tokenizer from {} over.".format(model_args.model_name_or_path)) + + resume_from_checkpoint_dir = get_last_checkpoint(training_args.output_dir) + model = build_model(model_args, training_args, resume_from_checkpoint_dir) + + raw_train_datasets = load_dataset( + 'parquet', + data_files=data_args.data_path, + split="train", + cache_dir=training_args.cache_dir + ) + if training_args.local_rank > 0: + torch.distributed.barrier() + + train_dataset = raw_train_datasets.map( + train_tokenize_function, + batched=True, + batch_size=3000, + num_proc=32, + remove_columns=raw_train_datasets.column_names, + load_from_cache_file=True, # not args.overwrite_cache + desc="Running Encoding", + fn_kwargs={ "tokenizer": tokenizer } + ) + + if training_args.local_rank == 0: + torch.distributed.barrier() + + if training_args.local_rank == 0: + logger.info("Training dataset samples:", len(train_dataset)) + for index in random.sample(range(len(train_dataset)), 3): + logger.info(f"Sample {index} of the training set: {train_dataset[index]['input_ids']}, {train_dataset[index]['labels']}.") + logger.info(f"Sample {index} of the training set: {tokenizer.decode(list(train_dataset[index]['input_ids']))}.") + + data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer) + data_module = dict(train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator) + + trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module) + if model_args.use_lora: + trainer.add_callback(SavePeftModelCallback) + trainer.train(resume_from_checkpoint = resume_from_checkpoint_dir) + trainer.save_state() + if not model_args.use_lora: + 
safe_save_model_for_hf_trainer(trainer=trainer, output_dir=training_args.output_dir) + +if __name__ == "__main__": + train() diff --git a/images/badge.svg b/images/badge.svg new file mode 100644 index 0000000..1551f56 --- /dev/null +++ b/images/badge.svg @@ -0,0 +1 @@ +DeepSeek: HomepageDeepSeekHomepage diff --git a/images/evaluation_deepseekmoe16b_base_1.jpg b/images/evaluation_deepseekmoe16b_base_1.jpg new file mode 100644 index 0000000..3d8991f Binary files /dev/null and b/images/evaluation_deepseekmoe16b_base_1.jpg differ diff --git a/images/evaluation_deepseekmoe16b_base_2.jpg b/images/evaluation_deepseekmoe16b_base_2.jpg new file mode 100644 index 0000000..976cd70 Binary files /dev/null and b/images/evaluation_deepseekmoe16b_base_2.jpg differ diff --git a/images/evaluation_deepseekmoe16b_base_openllm.jpg b/images/evaluation_deepseekmoe16b_base_openllm.jpg new file mode 100644 index 0000000..fe05e74 Binary files /dev/null and b/images/evaluation_deepseekmoe16b_base_openllm.jpg differ diff --git a/images/evaluation_deepseekmoe16b_chat.jpg b/images/evaluation_deepseekmoe16b_chat.jpg new file mode 100644 index 0000000..2fd4253 Binary files /dev/null and b/images/evaluation_deepseekmoe16b_chat.jpg differ diff --git a/images/logo.svg b/images/logo.svg new file mode 100644 index 0000000..4254944 --- /dev/null +++ b/images/logo.svg @@ -0,0 +1,22 @@ + + + Created with Pixso. + + + + + + + + + + + + + + + + + + + diff --git a/images/qr.jpeg b/images/qr.jpeg new file mode 100644 index 0000000..d0152d1 Binary files /dev/null and b/images/qr.jpeg differ diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..fa73e71 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,11 @@ +torch>=2.0.1 +tokenizers>=0.14.0 +transformers>=4.36.2 +accelerate +attrdict +tqdm + +deepspeed +datasets +tensorboardX +peft