# Inference

This folder contains inference examples for Llama 2. So far, we have provided support for three methods of inference:

  1. The inference.py script provides support for Hugging Face accelerate, PEFT, and FSDP fine-tuned models; a minimal generation sketch follows this list.

  2. The vLLM_inference.py script takes advantage of vLLM's paged attention for low-latency generation; see the vLLM sketch below.

  3. The hf_text_generation_inference folder contains information on Hugging Face Text Generation Inference (TGI); a minimal client request is sketched below.
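
To give a sense of what the first option covers, here is a minimal, illustrative sketch of loading a Llama 2 model with Hugging Face transformers, optionally applying a PEFT fine-tuned adapter, and generating from a prompt. The model name and adapter path are placeholders, and the actual command-line options of inference.py may differ; refer to the script for the real interface.

```python
# Minimal sketch of the kind of generation loop inference.py wraps.
# The model name and adapter path below are placeholders, not defaults of the script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel  # only needed when loading a PEFT fine-tuned adapter

base_model = "meta-llama/Llama-2-7b-hf"   # placeholder model name
peft_adapter = "path/to/peft/checkpoint"  # placeholder adapter path (optional)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, peft_adapter)  # skip for the base model

prompt = "Summarize this dialog:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```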
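
For the vLLM path, the sketch below uses vLLM's offline LLM/SamplingParams API to show the general shape of such a script. The model name is again a placeholder; vLLM_inference.py exposes its own arguments.

```python
# Minimal sketch of offline generation with vLLM, similar in spirit to vLLM_inference.py.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")  # placeholder model name
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

outputs = llm.generate(["Explain paged attention in one sentence."], sampling)
for out in outputs:
    print(out.outputs[0].text)
```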
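
For the TGI option, the folder above covers how to launch the server itself; the sketch below only illustrates a client-side request against an already running server, assuming a placeholder address of localhost:8080 and TGI's /generate endpoint.

```python
# Minimal sketch of querying a running TGI server; host and port are placeholders.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "What is Llama 2?", "parameters": {"max_new_tokens": 64}},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```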

For more in-depth information on inference, including inference safety checks and examples, see the inference documentation here.