This tutorial shows how to use vLLM and Hugging Face TGI to build on-prem apps with Llama 2 (a minimal vLLM sketch follows the list below).
* To run a quantized Llama 2 model on iOS and Android, you can use the open-source MLC LLM or llama.cpp. You can even make a Linux OS that boots straight into Llama 2 (repo).
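
As a quick orientation before the full tutorial, here is a minimal sketch of offline batched inference with vLLM. The model ID and sampling settings are illustrative assumptions, not values prescribed by the tutorial; swap in the Llama 2 checkpoint you have access to.

```python
# Minimal vLLM offline-inference sketch (assumes `pip install vllm` and
# access to the Hugging Face checkpoint below -- adjust to your setup).
from vllm import LLM, SamplingParams

prompts = [
    "What does on-prem deployment mean?",
    "Name one benefit of running Llama 2 locally.",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# Load the model once; vLLM handles GPU memory management and batching.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")

# Generate completions for all prompts in a single batched call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

For production-style serving, the tutorial covers running vLLM or TGI as an API server instead of using the offline API shown here.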