@@ -121,12 +121,12 @@
"Next we call the Llama 2 model from OctoAI. In this example we will use the Llama 2 13b chat FP16 model. You can find more on Llama 2 models on the [OctoAI text generation solution page](https://octoai.cloud/tools/text).\n",
"\n",
"At the time of writing this notebook the following Llama models are available on OctoAI:\n",
- "* llama-2-13b-chat-fp16\n",
- "* llama-2-70b-chat-fp16\n",
- "* codellama-7b-instruct-fp16\n",
- "* codellama-13b-instruct-fp16\n",
- "* codellama-34b-instruct-fp16\n",
- "* codellama-70b-instruct-fp16\n",
+ "* llama-2-13b-chat\n",
+ "* llama-2-70b-chat\n",
+ "* codellama-7b-instruct\n",
+ "* codellama-13b-instruct\n",
+ "* codellama-34b-instruct\n",
+ "* codellama-70b-instruct\n",
"\n",
"If you using local Llama, just set llm accordingly - see the [HelloLlamaLocal notebook](HelloLlamaLocal.ipynb)"
]
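
For reviewers who want to sanity-check the renamed model identifiers, here is a minimal sketch of a call against OctoAI's OpenAI-compatible chat endpoint. It is not part of this diff or the notebook: the base URL (`https://text.octoai.run/v1`) and the `OCTOAI_API_TOKEN` environment variable are assumptions drawn from OctoAI's documentation, and the notebook itself configures `llm` separately.

```python
# Hypothetical smoke test for the renamed models (not from the notebook).
# Assumes OctoAI's OpenAI-compatible endpoint and an OCTOAI_API_TOKEN env var.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://text.octoai.run/v1",  # assumed OctoAI text-gen endpoint
    api_key=os.environ["OCTOAI_API_TOKEN"],  # assumed token variable name
)

response = client.chat.completions.create(
    model="llama-2-13b-chat",  # new identifier, no -fp16 suffix
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=128,
    temperature=0.75,
)
print(response.choices[0].message.content)
```

Swapping in any of the other renamed identifiers (e.g. `codellama-34b-instruct`) should exercise the same path.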