
fix typos and spelling errors

Fixing some minor typos and spelling errors which should not affect functionality but improve the overall quality of documentation.
Elias James Howell · 1 year ago · commit b3067b55dc
5 changed files with 13 additions and 13 deletions:

1. docs/FAQ.md (+1 -1)
2. docs/LLM_finetuning.md (+1 -1)
3. docs/mutli_gpu.md (+7 -7)
4. docs/single_gpu.md (+3 -3)
5. inference/hf-text-generation-inference/README.md (+1 -1)

docs/FAQ.md (+1 -1)

@@ -12,7 +12,7 @@ Here we discuss frequently asked questions that may occur and we found useful al
 
 3. How do PEFT methods work with FSDP in terms of grad requirements/layer freezing?
 
-    We wrap the PEFT modules separate from the transfromer layer in auto_wrapping policy, that would result in PEFT models having `require_grad=True` while the rest of the model is  `require_grad=False`.
+    We wrap the PEFT modules separately from the transformer layers in the auto_wrapping policy; that results in the PEFT modules having `requires_grad=True` while the rest of the model has `requires_grad=False` (see the sketch below).
 
 4. Can I add custom datasets?
 

docs/LLM_finetuning.md (+1 -1)

@@ -42,7 +42,7 @@ You can also keep most of the layers frozen and only finetune a few layers. Ther
 
 
 
-In this scenario depending on the model size, you might need to go beyond one GPU, especially if your model does not fit into one GPU for training. In this case Llama 2 7B parameter wont fit into one gpu.
+In this scenario, depending on the model size, you might need to go beyond one GPU, especially if your model does not fit into one GPU for training. In this case the Llama 2 7B model won't fit into one GPU.
 The way to think about it: you need enough GPU memory to hold the model parameters, gradients, and optimizer states. Each of these, depending on the training precision, takes up a multiple of your parameter count × bytes per parameter (fp32 = 4 bytes, fp16 = 2 bytes, bf16 = 2 bytes).
 For example, the AdamW optimizer keeps 2 extra states for each parameter, and in many cases these are kept in fp32. This implies that, depending on how many layers you are training/unfreezing, your GPU memory needs can grow beyond one GPU.
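As a back-of-the-envelope check of the numbers above, here is a small sketch (assuming AdamW with two fp32 states per parameter and bf16/fp16 weights and gradients; activation memory is ignored):

```python
def training_memory_gib(n_params, weight_bytes=2, grad_bytes=2,
                        optim_states=2, optim_bytes=4):
    """Rough per-model memory for full fine-tuning, in GiB."""
    total_bytes = n_params * (weight_bytes + grad_bytes + optim_states * optim_bytes)
    return total_bytes / 1024**3

# Llama 2 7B: ~7e9 params -> roughly 78 GiB before activations, which is
# why full fine-tuning does not fit on a single 40-80 GB GPU.
print(f"{training_memory_gib(7e9):.0f} GiB")
```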
 

docs/mutli_gpu.md (+7 -7)

@@ -4,7 +4,7 @@ To run fine-tuning on multi-GPUs, we will  make use of two packages:
 
 1. [PEFT](https://huggingface.co/blog/peft) methods, in particular using the Hugging Face [PEFT](https://github.com/huggingface/peft) library.
 
-2. [FSDP](https://pytorch.org/tutorials/intermediate/FSDP_adavnced_tutorial.html) which helps us parallelize the training over mutiple GPUs. [More details](LLM_finetuning.md/#2-full-partial-parameter-finetuning).
+2. [FSDP](https://pytorch.org/tutorials/intermediate/FSDP_adavnced_tutorial.html) which helps us parallelize the training over multiple GPUs. [More details](LLM_finetuning.md/#2-full-partial-parameter-finetuning).
 
 Given the combination of PEFT and FSDP, we are able to fine-tune a Llama 2 model on multiple GPUs in one node or across multiple nodes.
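As an illustration of the PEFT half of this combination, here is a minimal LoRA setup with the Hugging Face peft library (the config values and target module names are illustrative, not the repo's defaults):

```python
from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # low-rank dimension
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)

# `model` is assumed to be an already-loaded causal LM (e.g. Llama 2).
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter params require grad
```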
 
@@ -21,7 +21,7 @@ pip install -r requirements.txt
 
 ## How to run it
 
-Get access to a machine with mutiple GPUs ( in this case we tested with 4 A100 and A10s).
+Get access to a machine with multiple GPUs (in this case we tested with 4 A100s and A10s).
 This runs with the `samsum_dataset` for the summarization application by default.
 
 **Multiple GPUs one node**:
@@ -68,7 +68,7 @@ sbatch multi_node.slurm
 
 ## How to run with different datasets?
 
-Currenty 4 datasets are supported that can be found in [Datasets config file](../configs/datasets.py).
+Currently 4 datasets are supported that can be found in [Datasets config file](../configs/datasets.py).
 
 * `grammar_dataset`: use this [notebook](../ft_datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
 
@@ -134,7 +134,7 @@ save_optimizer: bool=False
 
 * [Datasets config file](../configs/datasets.py) provides the available options for datasets.
 
-* [peft config file](../configs/peft.py) provides the suported PEFT methods and respective settings that can be modified.
+* [peft config file](../configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
 
 * [FSDP config file](../configs/fsdp.py) provides FSDP settings such as:
 
@@ -147,12 +147,12 @@ save_optimizer: bool=False
 
         * `SHARD_GRAD_OP` shards gradients and optimizer states and keeps the parameters after the first `all_gather`. This reduces communication overhead, especially on slower networks, and is particularly beneficial in multi-node cases. It comes with the trade-off of higher memory consumption.
 
-        * `NO_SHARD` this is equivalant to DDP, does not shard model parameters, gradinets or optimizer states. It keeps the full parameter after the first `all_gather`.
+        * `NO_SHARD` is equivalent to DDP; it does not shard model parameters, gradients or optimizer states. It keeps the full parameters after the first `all_gather`.
 
         * `HYBRID_SHARD`, available on PyTorch nightlies, does FSDP within a node and DDP between nodes. It is meant for multi-node cases and is helpful on slower networks, given your model fits into one node.
 
 * `checkpoint_type` specifies the state dict checkpoint type for saving the model. `FULL_STATE_DICT` streams the state_dict of each model shard from a rank to CPU and assembles the full state_dict on CPU. `SHARDED_STATE_DICT` saves one checkpoint per rank and enables re-loading the model with a different world size.
 
-* `fsdp_activation_checkpointing` enables activation checkpoining for FSDP, this saves siginificant amount of memory with the trade off of recomputing itermediate activations during the backward pass. The saved memory can be re-invested in higher batch sizes to increase the throughput. We recommond you use this option.
+* `fsdp_activation_checkpointing` enables activation checkpointing for FSDP; this saves a significant amount of memory with the trade-off of recomputing intermediate activations during the backward pass. The saved memory can be re-invested in higher batch sizes to increase throughput. We recommend you use this option.
 
-* `pure_bf16` it moves the  model to `BFloat16` and if `optimizer` is set to `anyprecision` then optimizer states will be kept in `BFloat16` as well. You can use this option if neccessary.
+* `pure_bf16` moves the model to `BFloat16`, and if `optimizer` is set to `anyprecision` then optimizer states will be kept in `BFloat16` as well. You can use this option if necessary.
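As a rough illustration of how the settings above map onto PyTorch's FSDP API (a sketch under assumed names; the repo's configs/fsdp.py and wrapping code may differ):

```python
import torch
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP,
    MixedPrecision,
    ShardingStrategy,
)

# Roughly what pure_bf16 implies: parameters, gradient reduction and
# buffers all in BFloat16.
bf16_policy = MixedPrecision(
    param_dtype=torch.bfloat16,
    reduce_dtype=torch.bfloat16,
    buffer_dtype=torch.bfloat16,
)

# `model` is assumed to be an already-loaded (and optionally PEFT-wrapped)
# Llama 2 model; see the hypothetical wrap policy sketched earlier.
model = FSDP(
    model,
    sharding_strategy=ShardingStrategy.FULL_SHARD,  # or SHARD_GRAD_OP / NO_SHARD / HYBRID_SHARD
    mixed_precision=bf16_policy,
    device_id=torch.cuda.current_device(),
)
```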

docs/single_gpu.md (+3 -3)

@@ -40,7 +40,7 @@ The args used in the command above are:
 
 ## How to run with different datasets?
 
-Currenty 4 datasets are supported that can be found in [Datasets config file](../configs/datasets.py).
+Currently 4 datasets are supported that can be found in [Datasets config file](../configs/datasets.py).
 
 * `grammar_dataset`: use this [notebook](../ft_datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
 
@@ -106,6 +106,6 @@ save_optimizer: bool=False
 
 ```
 
-* [Datasets config file](../configs/datasets.py) provides the avaiable options for datasets.
+* [Datasets config file](../configs/datasets.py) provides the available options for datasets.
 
-* [peft config file](../configs/peft.py) provides the suported PEFT methods and respective settings that can be modified.
+* [peft config file](../configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.

inference/hf-text-generation-inference/README.md (+1 -1)

@@ -4,7 +4,7 @@ This document shows how to serve a fine tuned LLaMA model with HuggingFace's text
 
 ## Step 0: Merging the weights (Only required if LoRA method was used) 
 
-In case the model was fine tuned with LoRA mehtod we need to merge the weights of the base model with the adapter weight. For this we can use the script `merge_lora_weights.py` which is located in the same folder as this README file.
+In case the model was fine-tuned with the LoRA method, we need to merge the weights of the base model with the adapter weights. For this we can use the script `merge_lora_weights.py`, which is located in the same folder as this README file.
 
 The script takes the base model, the peft weight folder, as well as an output path as arguments:
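The exact invocation is not shown in this hunk, but as a minimal sketch of what such a merge script does with the Hugging Face peft API (the paths below are placeholders, not repo defaults):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and attach the trained LoRA adapter weights.
base = AutoModelForCausalLM.from_pretrained("path/to/base-model")
model = PeftModel.from_pretrained(base, "path/to/peft-weights")

# Fold the LoRA deltas into the base weights and save a standalone model
# that a serving stack like text-generation-inference can load directly.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/output")
```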