@@ -1,7 +1,6 @@
# Examples
-This folder contains finetuning and inference examples for Llama 2.
-For the full documentation on these examples please refer to [docs/inference.md](../docs/inference.md)
+This folder contains finetuning and inference examples for Llama 2, Code Llama and [Purple Llama](https://ai.meta.com/llama/purple-llama/). For the full documentation on these examples please refer to [docs/inference.md](../docs/inference.md)
## Finetuning
@@ -27,6 +26,8 @@ So far, we have provided the following inference examples:
5. [Code Llama](./code_llama/) folder which provides examples for [code completion](./code_llama/code_completion_example.py) and [code infilling](./code_llama/code_infilling_example.py).
+6. The [Purple Llama Using Anyscale](./Purple_Llama_Anyscale.ipynb) notebook shows how to use the Anyscale-hosted Llama Guard model to classify user inputs as safe or unsafe.
+
For more in depth information on inference including inference safety checks and examples, see the inference documentation [here](../docs/inference.md).
**Note** The [sensitive topics safety checker](../src/llama_recipes/inference/safety_utils.py) utilizes AuditNLG which is an optional dependency. Please refer to installation section of the main [README.md](../README.md#install-with-optional-dependencies) for details.
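For orientation on item 5 above, here is a minimal sketch of Code Llama code infilling with Hugging Face `transformers`, assuming the `codellama/CodeLlama-7b-hf` checkpoint and its `<FILL_ME>` placeholder; the repo's [code_infilling_example.py](./code_llama/code_infilling_example.py) is the authoritative version and may differ in model loading and generation settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; the example scripts in ./code_llama/ take the model path as an argument.
model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# <FILL_ME> marks the span Code Llama should fill in between the prefix and the suffix.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'

input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
generated = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens and splice them back into the prompt.
filling = tokenizer.decode(generated[0, input_ids.shape[1]:], skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filling))
```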
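The [Purple Llama Using Anyscale](./Purple_Llama_Anyscale.ipynb) notebook added in item 6 calls Llama Guard through Anyscale's OpenAI-compatible API. A minimal sketch of that flow is below; the base URL and model id are illustrative assumptions, so check the notebook and your Anyscale account for the exact values.

```python
from openai import OpenAI

# Assumed Anyscale Endpoints URL and Llama Guard model id -- verify against the notebook.
client = OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",
    api_key="YOUR_ANYSCALE_API_KEY",
)

user_input = "How do I reset my router to factory settings?"

# Assumes the hosted endpoint applies Llama Guard's safety prompt template to the chat messages.
response = client.chat.completions.create(
    model="Meta-Llama/Llama-Guard-7b",
    messages=[{"role": "user", "content": user_input}],
)

# Llama Guard responds with "safe", or "unsafe" followed by the violated category codes.
print(response.choices[0].message.content)
```

The same check can be applied to model outputs by appending the assistant response to `messages` before classifying.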