Browse Source

Adding open in colab option for notebook (#395)

Hamid Shojanazeri 11 months ago
parent
commit
c8f4bdac41

+ 2 - 2
README.md

@@ -62,14 +62,14 @@ Optional dependencies can also be combines with [option1,option2].
 #### Install from source
 To install from source e.g. for development use these commands. We're using hatchling as our build backend which requires an up-to-date pip as well as setuptools package.
 ```
-git clone git@github.com:facebookresearch/llama-recipes.git
+git clone git@github.com:meta-llama/llama-recipes.git
 cd llama-recipes
 pip install -U pip setuptools
 pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 -e .
 ```
 For development and contributing to llama-recipes please install all optional dependencies:
 ```
-git clone git@github.com:facebookresearch/llama-recipes.git
+git clone git@github.com:meta-llama/llama-recipes.git
 cd llama-recipes
 pip install -U pip setuptools
 pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 -e .[tests,auditnlg,vllm]

+ 2 - 2
UPDATES.md

@@ -14,6 +14,6 @@ The PyTorch scripts currently provided for tokenization and model inference allo
 As noted in the documentation, these strings are required to use the fine-tuned chat models. However, prompt injections have also been used for manipulating or abusing models by bypassing their safeguards, allowing for the creation of content or behaviors otherwise outside the bounds of acceptable use. 
 
 ### Updated approach
-We recommend sanitizing [these strings](https://github.com/facebookresearch/llama#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this. 
+We recommend sanitizing [these strings](https://github.com/meta-llama/llama?tab=readme-ov-file#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this. 
 
-Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.
+Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/local_inference/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.

+ 1 - 1
recipes/benchmarks/inference/on-prem/README.md

@@ -6,7 +6,7 @@ We support benchmark on these serving framework:
 
 
 # vLLM - Getting Started
-To get started, we first need to deploy containers on-prem as a API host. Follow the guidance [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/llama-on-prem.md#setting-up-vllm-with-llama-2) to deploy vLLM on-prem.
+To get started, we first need to deploy containers on-prem as a API host. Follow the guidance [here](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/model_servers/llama-on-prem.md#setting-up-vllm-with-llama-2) to deploy vLLM on-prem.
 Note that in common scenario which overall throughput is important, we suggest you prioritize deploying as many model replicas as possible to reach higher overall throughput and request-per-second (RPS), comparing to deploy one model container among multiple GPUs for model parallelism. Additionally, as deploying multiple model replicas, there is a need for a higher level wrapper to handle the load balancing which here has been simulated in the benchmark scripts.  
 For example, we have an instance from Azure that has 8xA100 80G GPUs, and we want to deploy the Llama 2 70B chat model, which is around 140GB with FP16. So for deployment we can do:
 * 1x70B model parallel on 8 GPUs, each GPU RAM takes around 17.5GB for loading model weights.

File diff suppressed because it is too large
+ 3 - 3
recipes/inference/model_servers/llama-on-prem.md


+ 2 - 1
recipes/quickstart/Running_Llama2_Anywhere/Running_Llama_on_HF_transformers.ipynb

@@ -5,7 +5,8 @@
    "metadata": {},
    "source": [
     "## Running Llama2 on Google Colab using Hugging Face transformers library\n",
-    "This notebook goes over how you can set up and run Llama2 using Hugging Face transformers library"
+    "This notebook goes over how you can set up and run Llama2 using Hugging Face transformers library\n",
+    "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/quickstart/Running_Llama2_Anywhere/Running_Llama_on_HF_transformers.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
    ]
   },
   {

+ 44 - 44
recipes/responsible_ai/Purple_Llama_Anyscale.ipynb

@@ -3,8 +3,8 @@
     {
       "cell_type": "markdown",
       "metadata": {
-        "id": "view-in-github",
-        "colab_type": "text"
+        "colab_type": "text",
+        "id": "view-in-github"
       },
       "source": [
         "<a href=\"https://colab.research.google.com/github/amitsangani/Llama-2/blob/main/Purple_Llama_Anyscale.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
@@ -97,10 +97,10 @@
       "cell_type": "code",
       "execution_count": null,
       "metadata": {
-        "id": "yE3sPjS-cyd2",
         "colab": {
           "base_uri": "https://localhost:8080/"
         },
+        "id": "yE3sPjS-cyd2",
         "outputId": "93b36bc0-e6d4-493c-c88d-ec5c41266239"
       },
       "outputs": [
@@ -125,6 +125,11 @@
     },
     {
       "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "DOSiDW6hq9dI"
+      },
+      "outputs": [],
       "source": [
         "from string import Template\n",
         "\n",
@@ -195,18 +200,11 @@
         "    prompt = PROMPT_TEMPLATE.substitute(prompt=message, agent_type=role)\n",
         "    prompt = f\"<s>{B_INST} {prompt.strip()} {E_INST}\"\n",
         "    return prompt\n"
-      ],
-      "metadata": {
-        "id": "DOSiDW6hq9dI"
-      },
-      "execution_count": null,
-      "outputs": []
+      ]
     },
     {
       "cell_type": "code",
-      "source": [
-        "%pip install openai"
-      ],
+      "execution_count": null,
       "metadata": {
         "colab": {
           "base_uri": "https://localhost:8080/"
@@ -214,11 +212,10 @@
         "id": "t6hkFlVD9XFw",
         "outputId": "25fd187e-a484-4b90-d104-a3320b98e8ea"
       },
-      "execution_count": null,
       "outputs": [
         {
-          "output_type": "stream",
           "name": "stdout",
+          "output_type": "stream",
           "text": [
             "Collecting openai\n",
             "  Downloading openai-1.3.7-py3-none-any.whl (221 kB)\n",
@@ -248,26 +245,14 @@
             "\u001b[0mSuccessfully installed h11-0.14.0 httpcore-1.0.2 httpx-0.25.2 openai-1.3.7\n"
           ]
         }
+      ],
+      "source": [
+        "%pip install openai"
       ]
     },
     {
       "cell_type": "code",
-      "source": [
-        "import openai\n",
-        "\n",
-        "system_content = \"You will be provided with a product description and seed words. Your task is to generate potential product names.\"\n",
-        "user_content = \"Product description: A home milkshake maker. Seed words: fast, healthy, compact.\"\n",
-        "\n",
-        "client = openai.OpenAI(\n",
-        "           base_url = \"https://api.endpoints.anyscale.com/v1\",\n",
-        "           api_key=ANYSCALE_API_KEY\n",
-        "        )\n",
-        "completion = client.completions.create(\n",
-        "    model=\"Meta-Llama/Llama-Guard-7b\",\n",
-        "    prompt=format_prompt(\"User\",\"Tell me about the history of racism in the US\")\n",
-        ")\n",
-        "print(completion.choices[0].text)"
-      ],
+      "execution_count": null,
       "metadata": {
         "colab": {
           "base_uri": "https://localhost:8080/"
@@ -275,28 +260,35 @@
         "id": "xC-OYLF4rGc7",
         "outputId": "452ca85a-fd9f-426c-d4ab-7898bdabfab8"
       },
-      "execution_count": null,
       "outputs": [
         {
-          "output_type": "stream",
           "name": "stdout",
+          "output_type": "stream",
           "text": [
             " safe\n"
           ]
         }
-      ]
-    },
-    {
-      "cell_type": "code",
+      ],
       "source": [
         "import openai\n",
         "\n",
+        "system_content = \"You will be provided with a product description and seed words. Your task is to generate potential product names.\"\n",
+        "user_content = \"Product description: A home milkshake maker. Seed words: fast, healthy, compact.\"\n",
+        "\n",
+        "client = openai.OpenAI(\n",
+        "           base_url = \"https://api.endpoints.anyscale.com/v1\",\n",
+        "           api_key=ANYSCALE_API_KEY\n",
+        "        )\n",
         "completion = client.completions.create(\n",
         "    model=\"Meta-Llama/Llama-Guard-7b\",\n",
-        "    prompt=format_prompt(\"Agent\",\"User: How do you buy a tiger in the US\\n Agent: Steal one from the zoo\")\n",
+        "    prompt=format_prompt(\"User\",\"Tell me about the history of racism in the US\")\n",
         ")\n",
         "print(completion.choices[0].text)"
-      ],
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
       "metadata": {
         "colab": {
           "base_uri": "https://localhost:8080/"
@@ -304,16 +296,24 @@
         "id": "JtBTkJRxufb0",
         "outputId": "c581052d-348e-45ac-c874-80ecf4416a77"
       },
-      "execution_count": null,
       "outputs": [
         {
-          "output_type": "stream",
           "name": "stdout",
+          "output_type": "stream",
           "text": [
             " unsafe\n",
             "O3\n"
           ]
         }
+      ],
+      "source": [
+        "import openai\n",
+        "\n",
+        "completion = client.completions.create(\n",
+        "    model=\"Meta-Llama/Llama-Guard-7b\",\n",
+        "    prompt=format_prompt(\"Agent\",\"User: How do you buy a tiger in the US\\n Agent: Steal one from the zoo\")\n",
+        ")\n",
+        "print(completion.choices[0].text)"
       ]
     },
     {
@@ -326,7 +326,7 @@
         "- [Llama 2](https://ai.meta.com/llama/)\n",
         "- [Getting Started Guide - Llama 2](https://ai.meta.com/llama/get-started/)\n",
         "- [GitHub - Llama 2](https://github.com/facebookresearch/llama)\n",
-        "- [Github - LLama 2 Recipes](https://github.com/facebookresearch/llama-recipes) and [Llama 2 Demo Apps](https://github.com/facebookresearch/llama-recipes/tree/main/demo_apps)\n",
+        "- [Github - LLama 2 Recipes](https://github.com/facebookresearch/llama-recipes) and [Llama 2 Demo Apps](https://github.com/meta-llama/llama-recipes/tree/main/recipes)\n",
         "- [Research Paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)\n",
         "- [Model Card](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md)\n",
         "- [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/)\n",
@@ -357,10 +357,10 @@
   ],
   "metadata": {
     "colab": {
-      "provenance": [],
-      "toc_visible": true,
       "gpuType": "T4",
-      "include_colab_link": true
+      "include_colab_link": true,
+      "provenance": [],
+      "toc_visible": true
     },
     "kernelspec": {
       "display_name": "Python 3",

File diff suppressed because it is too large
+ 2 - 2
recipes/use_cases/chatbots/messenger_llama/messenger_llama2.md


File diff suppressed because it is too large
+ 2 - 2
recipes/use_cases/chatbots/whatsapp_llama/whatsapp_llama2.md