@@ -6,9 +6,9 @@
"metadata": {},
"source": [
"## This demo app shows:\n",
- "* how to run Llama2 locally on a Mac using llama-cpp-python and the llama-cpp's quantized Llama2 model;\n",
- "* how to use LangChain to ask Llama general questions;\n",
- "* how to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and ask questions about it. This is the well known RAG (Retrieval Augmented Generation) method to let LLM such as Llama2 be able to answer questions about the data not publicly available when Llama2 was trained, or about your own data. RAG is one way to prevent LLM's hallucination. "
+ "* How to run Llama2 locally on a Mac using llama-cpp-python and the llama-cpp's quantized Llama2 model\n",
+ "* How to use LangChain to ask Llama general questions\n",
+ "* How to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and ask questions about it. This is the well known RAG (Retrieval Augmented Generation) method to let LLM such as Llama2 be able to answer questions about the data not publicly available when Llama2 was trained, or about your own data. RAG is one way to prevent LLM's hallucination"
]
},
{