This folder contains inference examples for Llama 2. So far, we have provided support for three methods of inference:

- The inference.py script provides support for Hugging Face accelerate, PEFT, and FSDP fine-tuned models.
- The vllm/inference.py script takes advantage of vLLM's paged attention for low latency.
- The hf_text_generation_inference folder contains information on Hugging Face Text Generation Inference (TGI).
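A typical invocation of the inference script might look like the following. This is a sketch, not a definitive command line: the exact flag names and the local model path are assumptions, so check the script's `--help` output before running.

```shell
# Run inference on a fine-tuned model with the bundled sample prompt.
# --model_name and --prompt_file are assumed flag names; verify with:
#   python inference.py --help
python inference.py \
    --model_name /path/to/fine-tuned-llama-2-model \
    --prompt_file samsum_prompt.txt
```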
For more in-depth information on inference, including inference safety checks and examples, see the inference documentation.
**Note:** The vLLM example requires additional dependencies. Please refer to the installation section of the main README.md for details.
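The extra dependency can usually be pulled in with pip, as sketched below; the main README.md is authoritative on supported versions, so treat any version pin here as an assumption.

```shell
# Install vLLM for the vllm/inference.py example.
# The exact version constraint (if any) is specified in the main README.md.
pip install vllm
```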