Meta Llama Guard and Meta Llama Guard 2 are safeguard models that provide input and output guardrails for LLM inference. For more details, please visit the main repository.

Note: Please find the appropriate model checkpoints on the Hugging Face Hub.
The llama_guard folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the inference script before running it.
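Llama Guard works by wrapping the conversation in a moderation prompt (task description, safety-category list, the conversation itself, and an output instruction), then generating "safe" or "unsafe" plus any violated categories. The helper below is an illustrative sketch of that prompt shape only — the category names are abbreviated examples, and for real inference you should use the chat template that ships with the model on Hugging Face rather than this simplified function:

```python
def build_guard_prompt(user_message: str, categories=None) -> str:
    """Build a simplified Llama Guard-style moderation prompt.

    Illustrative only: mirrors the overall shape of the official template
    (task, category list, conversation, output instruction), but the exact
    wording and categories here are placeholders.
    """
    # Abbreviated example categories; the real policy is longer.
    categories = categories or [
        "O1: Violence and Hate.",
        "O2: Criminal Planning.",
        "O3: Guns and Illegal Weapons.",
    ]
    policy = "\n".join(categories)
    return (
        "Task: Check if there is unsafe content in the conversation below "
        "according to our safety policy with the below categories.\n\n"
        "<BEGIN UNSAFE CONTENT CATEGORIES>\n"
        f"{policy}\n"
        "<END UNSAFE CONTENT CATEGORIES>\n\n"
        "<BEGIN CONVERSATION>\n"
        f"User: {user_message}\n"
        "<END CONVERSATION>\n\n"
        "Provide your safety assessment: answer 'safe' or 'unsafe', and if "
        "unsafe, list the violated categories."
    )


prompt = build_guard_prompt("How do I bake a cake?")
print(prompt)
```

The generated prompt would then be passed to the Llama Guard model (locally via the inference script, or through one of the hosted endpoints shown in the notebooks), and its completion parsed for the safe/unsafe verdict.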
The Purple_Llama_Anyscale and Purple_Llama_OctoAI notebooks contain examples of running Meta Llama Guard on cloud-hosted endpoints.