Update recipes/multilingual/README.md

Co-authored-by: Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
rahul-sarvam 10 months ago
commit eb7ef4225f
1 changed file with 1 addition and 1 deletion

+1 -1
recipes/multilingual/README.md

@@ -106,7 +106,7 @@ for para in english_paragraphs:
 ```
 
 ### Train
-Finally, we can start finetuning Llama2 on these datasets by following the [finetuning recipes](https://github.com/rahul-sarvam/llama-recipes/tree/main/recipes/finetuning). Remember to pass the new tokenizer path as an argument to the script: `--tokenizer_name=./extended_tokenizer`.
+Finally, we can start finetuning Llama2 on these datasets by following the [finetuning recipes](https://github.com/meta-llama/llama-recipes/tree/main/recipes/finetuning). Remember to pass the new tokenizer path as an argument to the script: `--tokenizer_name=./extended_tokenizer`.
 
 OpenHathi was trained on 64 A100 80GB GPUs. Here are the hyperparameters used and other training details:
 - maximum learning rate: 2e-4
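
Beyond the one-line URL fix, the `--tokenizer_name=./extended_tokenizer` argument this hunk touches is the step that wires the extended tokenizer into training. As a minimal sketch of what that implies, assuming the `transformers` library and an illustrative base model id (neither is specified by this commit), the corresponding embedding resize looks roughly like:

```python
# Minimal sketch, not part of the commit: load the extended tokenizer
# referenced by --tokenizer_name and resize the base model's embeddings
# to match its vocabulary. Paths and the model id are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./extended_tokenizer")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# The merged vocabulary is larger than Llama2's default 32k entries,
# so the embedding matrices must grow to match before finetuning starts.
model.resize_token_embeddings(len(tokenizer))
```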