This folder contains inference examples for Llama 2. So far, we have provided support for three methods of inference:
1. The inference.py script provides support for models fine tuned with Hugging Face accelerate, PEFT, and FSDP (see the sketch after this list).
2. The vllm/inference.py script takes advantage of vLLM's paged attention concept for low latency (see the sketch after this list).
3. The hf_text_generation_inference folder contains information on Hugging Face Text Generation Inference (TGI).
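As an illustration of the first method, here is a minimal sketch (not the repo's inference.py) of loading a PEFT fine tuned Llama 2 model with Hugging Face transformers and peft and generating from a prompt. The base model id and adapter path are placeholders:

```python
# Minimal sketch, assuming a Llama 2 base checkpoint and a PEFT adapter produced
# by fine tuning; this is illustrative and not the repo's inference.py.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Llama-2-7b-hf"    # assumption: any Llama 2 base checkpoint
adapter_path = "path/to/peft_adapter"      # assumption: output dir of PEFT fine tuning

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_path)  # attach the fine tuned adapter
model.eval()

prompt = "Summarize this dialog:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```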
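For the second method, the sketch below shows offline generation with vLLM's Python API, which uses paged attention under the hood. The checkpoint and sampling settings are illustrative and not taken from vllm/inference.py:

```python
# Minimal sketch of offline batched generation with vLLM; the model id and
# sampling parameters are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")  # assumption: base Llama 2 checkpoint
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=100)

outputs = llm.generate(["Summarize this dialog:\n..."], sampling_params)
for out in outputs:
    print(out.outputs[0].text)
```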
For more in-depth information on inference, including safety checks and examples, see the inference documentation here.