
Address more spell checker issues

Matthias Reso, 1 year ago
commit e804e2ba3b

+ 1 - 1
docs/single_gpu.md

@@ -4,7 +4,7 @@ To run fine-tuning on a single GPU, we will make use of two packages
 
 1- [PEFT](https://huggingface.co/blog/peft) methods, specifically the HuggingFace [PEFT](https://github.com/huggingface/peft) library.
 
-2- [bitandbytes](https://github.com/TimDettmers/bitsandbytes) int8 quantization.
+2- [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) int8 quantization.
 
 Given the combination of PEFT and int8 quantization, we can fine-tune a Llama 2 7B model on a single consumer-grade GPU such as an A10.
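
For concreteness, the snippet below is a minimal sketch of this recipe. The model id, LoRA hyperparameters, and target modules are illustrative assumptions, not the repo's exact training script:

```python
# Minimal sketch: int8 loading via bitsandbytes + LoRA adapters via PEFT.
# Model id and LoRA hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # bitsandbytes int8 quantization
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; the int8 base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable
```

From here the model can go into a standard training loop; only the adapter weights receive gradients, which is what keeps a 7B model within a single A10's memory.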
 

+ 1 - 1
src/llama_recipes/inference/hf_text_generation_inference/README.md

@@ -1,6 +1,6 @@
 # Serving a fine-tuned Llama model with HuggingFace text-generation-inference server
 
-This document shows how to serve a fine tuned LLaMA mode with HuggingFace's text-generation-inference server. This option is currently only available for models that were trained using the LoRA method or without using the `--use_peft` argument.
+This document shows how to serve a fine-tuned Llama model with HuggingFace's text-generation-inference server. This option is currently only available for models that were trained using the LoRA method or without using the `--use_peft` argument.
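
Once the server from the steps below is running, it can be queried from Python. This is a hedged sketch assuming a text-generation-inference instance listening on localhost:8080; the URL and prompt are placeholders:

```python
# Hypothetical client call against a local text-generation-inference server;
# the endpoint URL and prompt below are assumptions for illustration.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")
print(client.text_generation("What does PEFT stand for?", max_new_tokens=64))
```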
 
 ## Step 0: Merging the weights (only required if the LoRA method was used)
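
One common way to perform the merge is with the PEFT library's `merge_and_unload`; the paths and model id below are placeholder assumptions, and this sketch may differ from the repo's own merge script:

```python
# Sketch: fold LoRA adapter weights into the base model so the server can
# load a plain checkpoint. Paths and model id are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)
peft_model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = peft_model.merge_and_unload()  # bake adapters into the base weights
merged.save_pretrained("path/to/merged-model")
```

The merged directory is then a plain HuggingFace checkpoint that the text-generation-inference server can load directly.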