
update HelloLlamaCloud.ipynb for Llama 3

Jeff Tang 6 months ago
parent
commit
b1aad34f85

+ 1 - 1
recipes/quickstart/Running_Llama2_Anywhere/Running_Llama_on_Mac_Windows_Linux.ipynb

@@ -42,7 +42,7 @@
     "\n",
     "Then you can run `ollama run llama3` and ask Llama 3 questions such as \"who wrote the book godfather?\" or \"who wrote the book godfather? answer in one sentence.\" You can also try `ollama run llama3:70b`, but the inference speed will most likely be too slow - for example, on an Apple M1 Pro with 32GB RAM, it takes over 10 seconds to generate one token (vs over 10 tokens per second with Llama 3 7b chat).\n",
     "\n",
-    "You can also run the following command to test Llama 3:\n",
+    "You can also run the following command to test Llama 3 (7b chat):\n",
     "```\n",
     " curl http://localhost:11434/api/chat -d '{\n",
     "  \"model\": \"llama3\",\n",

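The curl command in the hunk above is cut off after the `"model"` field. Based on Ollama's documented `/api/chat` endpoint, a complete non-streaming request would look roughly like this (a sketch, assuming an Ollama server running on the default port with the `llama3` model already pulled):

```shell
# Assumes Ollama is serving locally on the default port 11434
# and the model has been fetched beforehand with `ollama pull llama3`.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "who wrote the book godfather?" }
  ],
  "stream": false
}'
```

With `"stream": false` the server returns a single JSON object whose `message.content` field holds the answer; omitting it yields a stream of JSON chunks instead.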
File diff suppressed because it is too large
+ 62 - 98
recipes/use_cases/RAG/HelloLlamaCloud.ipynb


File diff suppressed because it is too large
+ 0 - 347
recipes/use_cases/RAG/HelloLlamaLocal.ipynb


Binary
recipes/use_cases/RAG/llama2.pdf