Examples

This folder contains finetuning and inference examples for Llama 2, Code Llama and [Purple Llama](https://ai.meta.com/llama/purple-llama/). For the full documentation on these examples, please refer to docs/inference.md

Finetuning

Please refer to the main README.md for information on how to use the finetuning.py script. After installing the llama-recipes package through pip, you can invoke finetuning in two ways:

python -m llama_recipes.finetuning <parameters>

python examples/finetuning.py <parameters>

Please see README.md for details.
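
For example, a parameter-efficient (LoRA) fine-tuning run could look like the command below. The flags shown follow the options described in the main README.md; the model path and output directory are placeholders to adapt to your setup:

python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --model_name <path/to/llama-2-7b-hf> --output_dir <path/to/peft/output>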

Inference

So far, we have provided the following inference examples:

  1. The inference.py script provides support for Hugging Face accelerate, PEFT and FSDP fine-tuned models. It also demonstrates safety features to protect the user from toxic or harmful content (a sample invocation is shown after this list).

  2. The vllm/inference.py script takes advantage of vLLM's paged attention for low-latency inference.

  3. The hf_text_generation_inference folder contains information on Hugging Face Text Generation Inference (TGI).

  4. A chat completion example highlighting the handling of chat dialogs.

  5. The code_llama folder provides examples for code completion, code infilling and Llama 2 70B code instruct.

  6. The Purple Llama Using Anyscale and Purple Llama Using OctoAI notebooks show how to use the Llama Guard model on Anyscale and OctoAI to classify user inputs as safe or unsafe.

  7. A Llama Guard inference example and a safety_checker for the main inference script. The standalone script allows you to test Llama Guard on user input, or on user input and agent response pairs. The safety_checker integration provides a way to run Llama Guard on every inference execution, for both the user input and the model output.
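
As an illustration of item 1, a minimal invocation of the main inference script could look like the command below. The flag names and paths here are assumptions based on typical usage of the script and may differ; check docs/inference.md for the authoritative options:

python examples/inference.py --model_name <path/to/model> --peft_model <path/to/peft/model> --prompt_file examples/samsum_prompt.txt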

For more in-depth information on inference, including inference safety checks and examples, see the inference documentation in docs/inference.md.

Note: The sensitive-topics safety checker utilizes AuditNLG, which is an optional dependency. Please refer to the installation section of the main README.md for details.

Note: The vLLM example requires additional dependencies. Please refer to the installation section of the main README.md for details.
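
Assuming the optional extras are named as in the main README.md, both sets of optional dependencies can be installed through pip, for example:

pip install llama-recipes[vllm]

pip install llama-recipes[auditnlg]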

Train on custom dataset

To show how to train a model on a custom dataset, we provide an example that generates a custom dataset in custom_dataset.py. The usage of the custom dataset is further described in the datasets README.
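
As a sketch of how this fits together (the exact flag syntax is documented in the datasets README and may differ between versions), the finetuning entry point can be pointed at the example dataset module like this:

python -m llama_recipes.finetuning --dataset "custom_dataset" --custom_dataset.file "examples/custom_dataset.py" <training parameters>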