
Examples

This folder contains finetuning and inference examples for Llama 2.

Finetuning

Please refer to the main README.md for information on how to use the finetuning.py script. After installing the llama-recipes package through pip, you can also invoke finetuning in two ways:

python -m llama_recipes.finetuning <parameters>

python examples/finetuning.py <parameters>

Please see README.md for details.
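
As an illustration, a LoRA finetuning run on a local Llama 2 checkpoint could look like the line below. The flags shown (--use_peft, --peft_method, --quantization, --model_name, --output_dir) follow the main README.md, both paths are placeholders, and you should verify the available parameters against your installed version:

python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --model_name /path/to/llama-2-7b --output_dir /path/to/save/peft/model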

Inference

So far, we have provided support for three methods of inference:

  1. The inference.py script provides support for Hugging Face accelerate, PEFT and FSDP fine-tuned models (see the example invocation below).

  2. The vllm/inference.py script takes advantage of vLLM's paged attention for low-latency inference.

  3. The hf_text_generation_inference folder contains information on Hugging Face Text Generation Inference (TGI).

For more in-depth information on inference, including inference safety checks and examples, see the inference documentation here.
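
For example, assuming the script accepts --model_name, --peft_model and --prompt_file parameters (check inference.py for the exact arguments), running a PEFT fine-tuned model on the bundled samsum_prompt.txt could look like this, with both paths as placeholders:

python examples/inference.py --model_name /path/to/llama-2-7b --peft_model /path/to/peft/model --prompt_file examples/samsum_prompt.txt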

Note: The vLLM example requires additional dependencies. Please refer to the installation section of the main README.md for details.
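
As a rough sketch, and assuming vllm/inference.py takes the base model via a --model_name parameter (check the script for its actual arguments), running the vLLM example could look like this once the extra dependencies are installed:

pip install vllm
python examples/vllm/inference.py --model_name /path/to/llama-2-7b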