This folder contains finetuning and inference examples for Llama 2.
Please refer to the main README.md for information on how to use the finetuning.py script. After installing the llama-recipes package through pip, you can also invoke finetuning in two ways:

```bash
python -m llama_recipes.finetuning <parameters>

python examples/finetuning.py <parameters>
```

Please see README.md for details.
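As a rough sketch, a parameter-efficient (LoRA) finetuning run could be launched as shown below. The flag names (`--use_peft`, `--peft_method`, `--model_name`, `--output_dir`) and the model identifier are illustrative assumptions, not a verified command line; consult the main README.md or the script's `--help` output for the options your installed version actually accepts.

```shell
# Illustrative invocation only: flag names and the model ID are assumptions
# and may differ in your installed version of llama-recipes.
python -m llama_recipes.finetuning \
    --use_peft --peft_method lora \
    --model_name meta-llama/Llama-2-7b-hf \
    --output_dir ./peft-output
```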
So far, we have provided support for three methods of inference:
The inference.py script provides support for Hugging Face accelerate, PEFT, and FSDP fine-tuned models.
The vllm/inference.py script takes advantage of vLLM's paged attention for low-latency inference.
The hf_text_generation_inference folder contains information on Hugging Face Text Generation Inference (TGI).
For more in-depth information on inference, including inference safety checks and examples, see the inference documentation here.
Note: The vLLM example requires additional dependencies. Please refer to the installation section of the main README.md for details.