| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Hamid Shojanazeri | 554396d4ec | bumping transformer versions for llama3 support | 7 months ago |
| Hamid Shojanazeri | d717be8ad4 | Merge pull request #3 from albertodepaola/l3p/finetuning_inference_chat_mods | 7 months ago |
| Matthias Reso | 43cb6a2db4 | Remove check for nightlies for low_cpu_fsdp and bump torch version to 2.2 instead | 7 months ago |
| varunfb | a404c9249c | Notebook to demonstrate using llama and llama-guard together using OctoAI | 8 months ago |
| Joone Hur | aec45aed81 | Add gradio to requirements.txt | 8 months ago |
| Beto | 7474514fe0 | Merging with main | 11 months ago |
| Beto | 7881b3bb99 | Changing safety utils to use HF classes to load Llama Guard. Removing Llama plain inference code | 11 months ago |
| Beto | 92be45b0fe | Adding matplotlib to requirements. Removing import from train_utils | 1 year ago |
| Matthias Reso | 1c473b6e7c | remove --find-links which is unsupported by packaging backends; Update documentation how to retrieve correct pytorch version | 1 year ago |
| Matthias Reso | bf152a7dcb | Upgrade torch requirement to 2.1 RC | 1 year ago |
| Matthias Reso | 5b6858949d | remove version pinning from bitsandbytes | 1 year ago |
| Matthias Reso | 31fabb254a | Make vllm optional | 1 year ago |
| Matthias Reso | 2717048197 | Add vllm and pytest as dependencies | 1 year ago |
| Matthias Reso | 02428c992a | Adding vllm as dependency; fix dep install with hatchling | 1 year ago |
| Matthias Reso | c8522eb0ff | Remove peft install from src | 1 year ago |
| Hamid Shojanazeri | 44ef280d31 | adding flash attention and xformer memory efficient through PT SDPA | 1 year ago |
| Hamid Shojanazeri | 954f6e741c | update transformers version requirement | 1 year ago |
| chauhang | 4767f09ecd | Initial commit | 1 year ago |