Hamid Shojanazeri | 554396d4ec | bumping transformer versions for llama3 support | 7 months ago
Hamid Shojanazeri | d717be8ad4 | Merge pull request #3 from albertodepaola/l3p/finetuning_inference_chat_mods | 7 months ago
Matthias Reso | 43cb6a2db4 | Remove check for nightlies for low_cpu_fsdp and bump torch version to 2.2 instead | 7 months ago
varunfb | a404c9249c | Notebook to demonstrate using llama and llama-guard together using OctoAI | 8 months ago
Joone Hur | aec45aed81 | Add gradio to requirements.txt | 8 months ago
Beto | 7474514fe0 | Merging with main | 11 months ago
Beto | 7881b3bb99 | Changing safety utils to use HF classes to load Llama Guard. Removing Llama plain inference code | 11 months ago
Beto | 92be45b0fe | Adding matplotlib to requirements. Removing import from train_utils | 1 year ago
Matthias Reso | 1c473b6e7c | remove --find-links which is unsupported by packaging backends; Update documentation on how to retrieve the correct pytorch version | 1 year ago
Matthias Reso | bf152a7dcb | Upgrade torch requirement to 2.1 RC | 1 year ago
Matthias Reso | 5b6858949d | remove version pinning from bitsandbytes | 1 year ago
Matthias Reso | 31fabb254a | Make vllm optional | 1 year ago
Matthias Reso | 2717048197 | Add vllm and pytest as dependencies | 1 year ago
Matthias Reso | 02428c992a | Adding vllm as dependency; fix dep install with hatchling | 1 year ago
Matthias Reso | c8522eb0ff | Remove peft install from src | 1 year ago
Hamid Shojanazeri | 44ef280d31 | adding flash attention and xformer memory efficient kernels through PT SDPA | 1 year ago
Hamid Shojanazeri | 954f6e741c | update transformers version requirement | 1 year ago
chauhang | 4767f09ecd | Initial commit | 1 year ago