
Update recipes/multilingual/README.md

Co-authored-by: Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
rahul-sarvam, 10 months ago
Parent commit: eb7ef4225f
1 changed file with 1 addition and 1 deletion

recipes/multilingual/README.md (+1, -1)

@@ -106,7 +106,7 @@ for para in english_paragraphs:
 ```
 
 ### Train
-Finally, we can start finetuning Llama2 on these datasets by following the [finetuning recipes](https://github.com/rahul-sarvam/llama-recipes/tree/main/recipes/finetuning). Remember to pass the new tokenizer path as an argument to the script: `--tokenizer_name=./extended_tokenizer`.
+Finally, we can start finetuning Llama2 on these datasets by following the [finetuning recipes](https://github.com/meta-llama/llama-recipes/tree/main/recipes/finetuning). Remember to pass the new tokenizer path as an argument to the script: `--tokenizer_name=./extended_tokenizer`.
 
 OpenHathi was trained on 64 A100 80GB GPUs. Here are the hyperparameters used and other training details:
 - maximum learning rate: 2e-4
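
The `--tokenizer_name=./extended_tokenizer` argument mentioned in the changed line implies one extra step worth noting: an extended tokenizer has more tokens than the base Llama2 vocabulary, so the model's embedding matrices must be resized to match before finetuning. The sketch below is a minimal illustration using Hugging Face `transformers`; the model id is an assumption for demonstration, not part of the original recipe.

```python
# Minimal sketch (assumption: Hugging Face transformers is the loading path,
# as in the llama-recipes finetuning flow). The tokenizer path mirrors the
# --tokenizer_name argument from the README; the model id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./extended_tokenizer")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# The extended tokenizer exceeds the base Llama2 vocabulary (32,000 tokens),
# so resize the input/output embeddings to match before training begins.
model.resize_token_embeddings(len(tokenizer))
print(f"vocab size after extension: {len(tokenizer)}")
```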