Author | Commit | Message | Committed
Hamid Shojanazeri | 85c66acf21 | adding mmlu to leaderboard confgs | 10 months ago
Hamid Shojanazeri | 3012a230bf | clean up | 10 months ago
Hamid Shojanazeri | 2c2fcd14e4 | adding eval harness pipeline | 10 months ago
Jeff Tang | 98b122e57a | Fix broken format in preview for RAG chatbot example (#355) | 10 months ago
Chester Hu | 9a06875dec | Update RAG_Chatbot_Example.ipynb | 10 months ago
Chester Hu | b07f349a2c | Fix broken formatting in the preview. | 10 months ago
Joone Hur | ed3e11e9a8 | Add option to enable Llamaguard content safety check in chat_completion | 10 months ago
Geeta Chauhan | 1896fd8261 | Fix test_finetuning for env without cuda (#327) | 10 months ago
Less Wright | 3f2c33e4f8 | Update finetuning.py - remove nightly check | 10 months ago
Jeff Tang | 9e548ce6f1 | Add Prompt Engineering with Llama 2 (#353) | 10 months ago
Dalton Flanagan | 1278e2dbcf | Add Prompt Engineering with Llama 2 | 10 months ago
Chester Hu | 689e57bb50 | Add inference throughput benchmark on-prem vllm (#331) | 10 months ago
Chester Hu | ff323f49c0 | Update delay simulation comment | 10 months ago
Jeff Tang | 9c039cd122 | fix of dead link in demo apps readme (#350) | 10 months ago
Jeff Tang | 16aeddac41 | fix of dead link | 10 months ago
Hamid Shojanazeri | b15ffeeaf4 | clean up | 10 months ago
Hamid Shojanazeri | 8bf474b455 | clean up | 10 months ago
Hamid Shojanazeri | 19089269d3 | add gc | 10 months ago
Hamid Shojanazeri | dbfea484c6 | Feature : Enable Intel GPU/XPU finetuning and inference (#116) | 10 months ago
Danielle Pintz | fdc4c64d0b | Update Installation section of README.md | 11 months ago
Jeff Tang | b0646bfd9c | typo fix in Purple_Llama_Anyscale.ipynb (#346) | 10 months ago
Jeff Tang | cc87011701 | typo fix in Purple_Llama_Anyscale.ipynb | 10 months ago
Chester Hu | fce0485634 | fix type and rename folder names | 10 months ago
Chester Hu | e80c2588a6 | Address comments | 10 months ago
Chester Hu | bd3eb3ad95 | Update README.md | 10 months ago
Chester Hu | e6a28c7162 | Merge branch 'benchmark-infernece-throughput-onperm-vllm' of https://github.com/facebookresearch/llama-recipes into benchmark-infernece-throughput-onperm-vllm | 10 months ago
Chester Hu | 2b0fc14081 | Update README.md | 10 months ago
Chester Hu | 87f6119369 | Merge branch 'main' into benchmark-infernece-throughput-onperm-vllm | 10 months ago
Chester Hu | 9f84f73420 | Address comments | 10 months ago
albertodepaola | aaa769c91b | Llama guard data formatter example (#337) | 10 months ago