albertodepaola | 79c2dd0355 | Merge pull request #4 from tryrobbo/main | 7 months ago
Hamid Shojanazeri | d717be8ad4 | Merge pull request #3 from albertodepaola/l3p/finetuning_inference_chat_mods | 7 months ago
Matthias Reso | ab254d121c | 7b -> 8b | 7 months ago
Matthias Reso | 43cb6a2db4 | Remove check for nighlies for low_cpu_fsdp and bump torch version to 2.2 instead | 7 months ago
Thomas Robinson | 81642c247d | Create CodeShieldUsageDemo.ipynb | 7 months ago
Matthias Reso | cad284c66f | Replace new model url | 7 months ago
Matthias Reso | 8b0a233c1a | Use new chat format in custom dataset | 7 months ago
Matthias Reso | 83fae41195 | Add test for chat completion formatting | 7 months ago
Matthias Reso | 6d9d48d619 | Use apply_chat_template instead of custom functions | 7 months ago
Matthias Reso | 5efea160a2 | Adapt test_finetuning to new model | 7 months ago
Matthias Reso | 739483f262 | Adjust test_grammar_datasets to stable sort | 7 months ago
Matthias Reso | b96e435cda | Adjust test_samsum_dataset to second model | 7 months ago
Matthias Reso | fac41298b0 | Adapt test_custom_dataset to new model | 7 months ago
Matthias Reso | 960014a3bb | Fix test_custom_dataset by introducing a stable sort algorithm | 7 months ago
Matthias Reso | b5583b31d5 | Adapt test_grammar_dataset to new model | 7 months ago
Matthias Reso | 17a6d16289 | Test batching for both llama versions | 7 months ago
Matthias Reso | a414ca6a57 | Update chat format for llama3 | 7 months ago
Matthias Reso | 113ea18bf1 | Replace LlamaTokenizer with AutoTokenizer | 7 months ago
Beto | 5979dbe996 | Merging local with remote | 7 months ago
Beto | d4cbfa1cc1 | Merging upstream llama-recipes to current repo | 7 months ago
Hamid Shojanazeri | aaa9e2c863 | Adding a feature that will stop the training/eval process after reaching some max_steps (#428) | 7 months ago
Kai Wu | e6f69f84ad | add max_steps_reached to reduce redundancy | 7 months ago
Kai Wu | 362cda0fa6 | fixing test_gradient_accumulation and test_save_to_json | 7 months ago
Kai Wu | fa0a389f74 | add max_step feature for training and eval | 7 months ago
Hamid Shojanazeri | 37c8f72211 | Update location and name of llm.py example notebook (#417) | 7 months ago
Thomas Robinson | 79266217ef | Update location and name of llm.py example notebook | 7 months ago
Hamid Shojanazeri | f7aa02af9f | only save training params on rank 0 (#415) | 7 months ago
jpgard | 6954b16b3b | only save training params on rank 0 | 8 months ago
varunfb | a404c9249c | Notebook to demonstrate using llama and llama-guard together using OctoAI | 8 months ago
Beto | 18d76ed36f | merging into private llama recipes repo | 8 months ago