
simplifying the model names

Thierry Moreau, 1 year ago
commit 5e777e137a

+ 6 - 6
demo_apps/OctoAI_API_examples/HelloLlamaCloud.ipynb

@@ -60,12 +60,12 @@
     "Next we call the Llama 2 model from OctoAI. In this example we will use the Llama 2 13b chat FP16 model. You can find more on Llama 2 models on the [OctoAI text generation solution page](https://octoai.cloud/tools/text).\n",
     "\n",
     "At the time of writing this notebook the following Llama models are available on OctoAI:\n",
-    "* llama-2-13b-chat-fp16\n",
-    "* llama-2-70b-chat-fp16\n",
-    "* codellama-7b-instruct-fp16\n",
-    "* codellama-13b-instruct-fp16\n",
-    "* codellama-34b-instruct-fp16\n",
-    "* codellama-70b-instruct-fp16"
+    "* llama-2-13b-chat\n",
+    "* llama-2-70b-chat\n",
+    "* codellama-7b-instruct\n",
+    "* codellama-13b-instruct\n",
+    "* codellama-34b-instruct\n",
+    "* codellama-70b-instruct"
    ]
   },
   {
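The rename in the hunk above amounts to dropping the `-fp16` suffix from each model name. As a minimal sketch (this helper is hypothetical and not part of the notebooks), older scripts that still reference the suffixed names could be mapped to the simplified ones like so:

```python
# Hypothetical compatibility shim (not part of the notebooks): maps the
# old fp16-suffixed OctoAI model names to the simplified names this
# commit switches to, so code using the old names keeps working.
_OLD_SUFFIX = "-fp16"

def simplify_model_name(name: str) -> str:
    """Drop a trailing '-fp16' suffix if present; otherwise return unchanged."""
    if name.endswith(_OLD_SUFFIX):
        return name[: -len(_OLD_SUFFIX)]
    return name
```

For example, `simplify_model_name("llama-2-13b-chat-fp16")` yields `"llama-2-13b-chat"`, while already-simplified names pass through unchanged.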

+ 6 - 6
demo_apps/OctoAI_API_examples/LiveData.ipynb

@@ -103,12 +103,12 @@
     "We will use the Llama 2 13b chat FP16 model. You can find more on Llama 2 models on the [OctoAI text generation solution page](https://octoai.cloud/tools/text).\n",
     "\n",
     "At the time of writing this notebook the following Llama models are available on OctoAI:\n",
-    "* llama-2-13b-chat-fp16\n",
-    "* llama-2-70b-chat-fp16\n",
-    "* codellama-7b-instruct-fp16\n",
-    "* codellama-13b-instruct-fp16\n",
-    "* codellama-34b-instruct-fp16\n",
-    "* codellama-70b-instruct-fp16"
+    "* llama-2-13b-chat\n",
+    "* llama-2-70b-chat\n",
+    "* codellama-7b-instruct\n",
+    "* codellama-13b-instruct\n",
+    "* codellama-34b-instruct\n",
+    "* codellama-70b-instruct"
    ]
   },
   {

+ 6 - 6
demo_apps/OctoAI_API_examples/RAG_Chatbot_example/RAG_Chatbot_Example.ipynb

@@ -313,12 +313,12 @@
     "Next we call the Llama 2 model from OctoAI. In this example we will use the Llama 2 13b chat FP16 model. You can find more on Llama 2 models on the [OctoAI text generation solution page](https://octoai.cloud/tools/text).\n",
     "\n",
     "At the time of writing this notebook the following Llama models are available on OctoAI:\n",
-    "* llama-2-13b-chat-fp16\n",
-    "* llama-2-70b-chat-fp16\n",
-    "* codellama-7b-instruct-fp16\n",
-    "* codellama-13b-instruct-fp16\n",
-    "* codellama-34b-instruct-fp16\n",
-    "* codellama-70b-instruct-fp16"
+    "* llama-2-13b-chat\n",
+    "* llama-2-70b-chat\n",
+    "* codellama-7b-instruct\n",
+    "* codellama-13b-instruct\n",
+    "* codellama-34b-instruct\n",
+    "* codellama-70b-instruct"
    ]
   },
   {

+ 6 - 6
demo_apps/OctoAI_API_examples/VideoSummary.ipynb

@@ -121,12 +121,12 @@
     "Next we call the Llama 2 model from OctoAI. In this example we will use the Llama 2 13b chat FP16 model. You can find more on Llama 2 models on the [OctoAI text generation solution page](https://octoai.cloud/tools/text).\n",
     "\n",
     "At the time of writing this notebook the following Llama models are available on OctoAI:\n",
-    "* llama-2-13b-chat-fp16\n",
-    "* llama-2-70b-chat-fp16\n",
-    "* codellama-7b-instruct-fp16\n",
-    "* codellama-13b-instruct-fp16\n",
-    "* codellama-34b-instruct-fp16\n",
-    "* codellama-70b-instruct-fp16\n",
+    "* llama-2-13b-chat\n",
+    "* llama-2-70b-chat\n",
+    "* codellama-7b-instruct\n",
+    "* codellama-13b-instruct\n",
+    "* codellama-34b-instruct\n",
+    "* codellama-70b-instruct\n",
     "\n",
    "If you're using a local Llama model, just set llm accordingly - see the [HelloLlamaLocal notebook](HelloLlamaLocal.ipynb)"
    ]
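For context, the notebooks pass the chosen model name in the text-generation request body. The sketch below shows how a chat-completion payload might be built with one of the simplified names; the field layout here is an assumption based on OpenAI-style chat APIs, not something shown in this diff:

```python
import json

# Assumed OpenAI-style chat-completion payload; the exact schema the
# OctoAI notebooks use is not visible in this commit's hunks.
def build_payload(model: str, user_message: str) -> str:
    body = {
        "model": model,  # e.g. "llama-2-13b-chat" (new simplified name)
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(body)
```

After this commit, callers would pass `"llama-2-13b-chat"` here rather than `"llama-2-13b-chat-fp16"`.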