Hamid Shojanazeri | 554396d4ec | bumping transformer versions for llama3 support | 7 months ago
Hamid Shojanazeri | d717be8ad4 | Merge pull request #3 from albertodepaola/l3p/finetuning_inference_chat_mods | 7 months ago
Matthias Reso | 43cb6a2db4 | Remove check for nighlies for low_cpu_fsdp and bump torch version to 2.2 instead | 7 months ago
varunfb | a404c9249c | Notebook to demonstrate using llama and llama-guard together using OctoAI | 8 months ago
Joone Hur | aec45aed81 | Add gradio to requirements.txt | 9 months ago
Beto | 7474514fe0 | Merging with main | 11 months ago
Beto | 7881b3bb99 | Changing safety utils to use HF classes to load Llama Guard. Removing Llama plain inference code | 11 months ago
Beto | 92be45b0fe | Adding matplotlib to requirements. Removing import from train_utils | 1 year ago
Matthias Reso | 1c473b6e7c | remove --find-links which is unsupported by packaging backends; Update documentation how to retireve correct pytorch version | 1 year ago
Matthias Reso | bf152a7dcb | Upgrade torch requirement to 2.1 RC | 1 year ago
Matthias Reso | 5b6858949d | remove version pinning from bitsandbytes | 1 year ago
Matthias Reso | 31fabb254a | Make vllm optional | 1 year ago
Matthias Reso | 2717048197 | Add vllm and pytest as dependencies | 1 year ago
Matthias Reso | 02428c992a | Adding vllm as dependency; fix dep install with hatchling | 1 year ago
Matthias Reso | c8522eb0ff | Remove peft install from src | 1 year ago
Hamid Shojanazeri | 44ef280d31 | adding flash attention and xformer memory efficient through PT SDPA | 1 year ago
Hamid Shojanazeri | 954f6e741c | update transformers version requirement | 1 year ago
chauhang | 4767f09ecd | Initial commit | 1 year ago