
Merge branch 'main' into subramen-patch-deadlinks

Suraj Subramanian 11 months ago
parent
commit 12602f32e2

+ 2 - 2
README.md

@@ -62,14 +62,14 @@ Optional dependencies can also be combined with [option1,option2].
 #### Install from source
 To install from source, e.g. for development, use the following commands. We use hatchling as our build backend, which requires an up-to-date pip as well as the setuptools package.
 ```
-git clone git@github.com:facebookresearch/llama-recipes.git
+git clone git@github.com:meta-llama/llama-recipes.git
 cd llama-recipes
 pip install -U pip setuptools
 pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 -e .
 ```
 For development and contributing to llama-recipes, please install all optional dependencies:
 ```
-git clone git@github.com:facebookresearch/llama-recipes.git
+git clone git@github.com:meta-llama/llama-recipes.git
 cd llama-recipes
 pip install -U pip setuptools
 pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 -e .[tests,auditnlg,vllm]

+ 2 - 2
UPDATES.md

@@ -14,6 +14,6 @@ The PyTorch scripts currently provided for tokenization and model inference allo
 As noted in the documentation, these strings are required to use the fine-tuned chat models. However, prompt injections have also been used for manipulating or abusing models by bypassing their safeguards, allowing for the creation of content or behaviors otherwise outside the bounds of acceptable use. 
 
 ### Updated approach
-We recommend sanitizing [these strings](https://github.com/facebookresearch/llama#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this. 
+We recommend sanitizing [these strings](https://github.com/meta-llama/llama?tab=readme-ov-file#fine-tuned-chat-models) from any user-provided prompts. Sanitizing user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this.
 
-Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](./recipes/inference/local_inference/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.
+Note: even with this update, safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](./recipes/inference/local_inference/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.
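
As a concrete illustration of the updated approach, here is a minimal sanitization sketch; the tag list assumes the Llama 2 chat format, and the helper name is illustrative rather than the repository's actual API.

```
# Minimal sanitization sketch, assuming the Llama 2 chat special strings.
SPECIAL_TAGS = ["[INST]", "[/INST]", "<<SYS>>", "<</SYS>>"]

def sanitize_prompt(user_prompt: str) -> str:
    # Strip the special chat-format tags from user-provided text.
    for tag in SPECIAL_TAGS:
        user_prompt = user_prompt.replace(tag, "")
    return user_prompt
```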

+ 2 - 0
recipes/benchmarks/inference/on-prem/README.md

@@ -6,7 +6,9 @@ We support benchmarks on these serving frameworks:
 
 
 # vLLM - Getting Started
+
 To get started, we first need to deploy containers on-prem as an API host. Follow the guidance [here](../../../inference/model_servers/llama-on-prem.md#setting-up-vllm-with-llama-2) to deploy vLLM on-prem.
+
 Note that in the common scenario where overall throughput is important, we suggest prioritizing the deployment of as many model replicas as possible to reach a higher overall throughput and requests-per-second (RPS), rather than deploying one model across multiple GPUs for model parallelism. Additionally, when deploying multiple model replicas, a higher-level wrapper is needed to handle the load balancing; this has been simulated in the benchmark scripts.
 For example, suppose we have an Azure instance with 8xA100 80GB GPUs and want to deploy the Llama 2 70B chat model, which takes around 140GB in FP16. For deployment we can do:
 * 1x 70B model parallel over 8 GPUs: each GPU holds around 17.5GB of model weights.
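
The load-balancing wrapper mentioned above could look roughly like the following sketch, assuming one OpenAI-compatible vLLM server per replica; the endpoint URLs and model ID are placeholders, and this is not the repository's actual benchmark code.

```
# Minimal round-robin load-balancing sketch across model replicas.
# Assumes one OpenAI-compatible vLLM server per replica; URLs are placeholders.
import itertools
import requests

ENDPOINTS = itertools.cycle([
    "http://localhost:8000/v1/completions",
    "http://localhost:8001/v1/completions",
])

def complete(prompt: str, max_tokens: int = 128) -> str:
    url = next(ENDPOINTS)  # rotate to the next replica
    resp = requests.post(url, json={
        "model": "meta-llama/Llama-2-70b-chat-hf",
        "prompt": prompt,
        "max_tokens": max_tokens,
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]
```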

+ 3 - 0
recipes/inference/model_servers/llama-on-prem.md

Diff not shown: the file is too large.


+ 2 - 1
recipes/quickstart/Running_Llama2_Anywhere/Running_Llama_on_HF_transformers.ipynb

@@ -5,7 +5,8 @@
    "metadata": {},
    "source": [
     "## Running Llama2 on Google Colab using Hugging Face transformers library\n",
-    "This notebook goes over how you can set up and run Llama2 using Hugging Face transformers library"
+    "This notebook goes over how you can set up and run Llama2 using Hugging Face transformers library\n",
+    "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/quickstart/Running_Llama2_Anywhere/Running_Llama_on_HF_transformers.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
    ]
   },
   {
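
For reference, the setup the notebook goes on to describe amounts to something like the following transformers sketch; the 7B chat model ID and generation settings are illustrative assumptions, and access to the gated model is assumed.

```
# Rough sketch of running Llama 2 with Hugging Face transformers.
# Model ID and generation settings are illustrative assumptions;
# device_map="auto" additionally requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumes gated-model access
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is a llama?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```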